00:00:00.002 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2380 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3645 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.117 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.118 The recommended git tool is: git 00:00:00.118 using credential 00000000-0000-0000-0000-000000000002 00:00:00.120 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.173 Fetching changes from the remote Git repository 00:00:00.176 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.223 Using shallow fetch with depth 1 00:00:00.223 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.223 > git --version # timeout=10 00:00:00.263 > git --version # 'git version 2.39.2' 00:00:00.263 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.296 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.296 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.462 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.473 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.487 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.487 > git config core.sparsecheckout # timeout=10 00:00:06.498 > git read-tree -mu HEAD # timeout=10 00:00:06.515 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.539 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.539 > git rev-list --no-walk cfa3b8b20295fad8bbdf1ec61de6f7d828e66f18 # timeout=10 00:00:06.638 [Pipeline] Start of Pipeline 00:00:06.652 [Pipeline] library 00:00:06.654 Loading library shm_lib@master 00:00:06.654 Library shm_lib@master is cached. Copying from home. 00:00:06.668 [Pipeline] node 00:00:06.675 Running on VM-host-WFP7 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.676 [Pipeline] { 00:00:06.685 [Pipeline] catchError 00:00:06.689 [Pipeline] { 00:00:06.699 [Pipeline] wrap 00:00:06.706 [Pipeline] { 00:00:06.713 [Pipeline] stage 00:00:06.714 [Pipeline] { (Prologue) 00:00:06.727 [Pipeline] echo 00:00:06.728 Node: VM-host-WFP7 00:00:06.733 [Pipeline] cleanWs 00:00:06.741 [WS-CLEANUP] Deleting project workspace... 00:00:06.741 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.749 [WS-CLEANUP] done 00:00:06.992 [Pipeline] setCustomBuildProperty 00:00:07.082 [Pipeline] httpRequest 00:00:07.739 [Pipeline] echo 00:00:07.741 Sorcerer 10.211.164.20 is alive 00:00:07.750 [Pipeline] retry 00:00:07.751 [Pipeline] { 00:00:07.759 [Pipeline] httpRequest 00:00:07.763 HttpMethod: GET 00:00:07.763 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.764 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.786 Response Code: HTTP/1.1 200 OK 00:00:07.786 Success: Status code 200 is in the accepted range: 200,404 00:00:07.787 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.287 [Pipeline] } 00:00:12.305 [Pipeline] // retry 00:00:12.314 [Pipeline] sh 00:00:12.598 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.614 [Pipeline] httpRequest 00:00:12.979 [Pipeline] echo 00:00:12.981 Sorcerer 10.211.164.20 is alive 00:00:12.991 [Pipeline] retry 00:00:12.993 [Pipeline] { 00:00:13.009 [Pipeline] httpRequest 00:00:13.014 HttpMethod: GET 00:00:13.014 URL: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:13.015 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:13.027 Response Code: HTTP/1.1 200 OK 00:00:13.028 Success: Status code 200 is in the accepted range: 200,404 00:00:13.028 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:48.138 [Pipeline] } 00:01:48.157 [Pipeline] // retry 00:01:48.164 [Pipeline] sh 00:01:48.444 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:51.012 [Pipeline] sh 00:01:51.294 + git -C spdk log --oneline -n5 00:01:51.294 d47eb51c9 bdev: fix a race between reset start and complete 00:01:51.294 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:51.294 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:51.294 4bcab9fb9 correct kick for CQ full case 00:01:51.294 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:51.315 [Pipeline] withCredentials 00:01:51.326 > git --version # timeout=10 00:01:51.343 > git --version # 'git version 2.39.2' 00:01:51.360 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:51.363 [Pipeline] { 00:01:51.372 [Pipeline] retry 00:01:51.374 [Pipeline] { 00:01:51.391 [Pipeline] sh 00:01:51.673 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:51.993 [Pipeline] } 00:01:52.011 [Pipeline] // retry 00:01:52.016 [Pipeline] } 00:01:52.033 [Pipeline] // withCredentials 00:01:52.043 [Pipeline] httpRequest 00:01:52.406 [Pipeline] echo 00:01:52.408 Sorcerer 10.211.164.20 is alive 00:01:52.419 [Pipeline] retry 00:01:52.421 [Pipeline] { 00:01:52.438 [Pipeline] httpRequest 00:01:52.443 HttpMethod: GET 00:01:52.444 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:52.445 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:52.453 Response Code: HTTP/1.1 200 OK 00:01:52.453 Success: Status code 200 is in the accepted range: 200,404 00:01:52.454 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:01.855 [Pipeline] } 
00:02:01.872 [Pipeline] // retry 00:02:01.881 [Pipeline] sh 00:02:02.162 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:03.545 [Pipeline] sh 00:02:03.826 + git -C dpdk log --oneline -n5 00:02:03.826 caf0f5d395 version: 22.11.4 00:02:03.826 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:03.826 dc9c799c7d vhost: fix missing spinlock unlock 00:02:03.826 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:03.826 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:03.845 [Pipeline] writeFile 00:02:03.862 [Pipeline] sh 00:02:04.144 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:04.157 [Pipeline] sh 00:02:04.440 + cat autorun-spdk.conf 00:02:04.440 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.440 SPDK_TEST_NVME=1 00:02:04.440 SPDK_TEST_FTL=1 00:02:04.440 SPDK_TEST_ISAL=1 00:02:04.440 SPDK_RUN_ASAN=1 00:02:04.440 SPDK_RUN_UBSAN=1 00:02:04.440 SPDK_TEST_XNVME=1 00:02:04.440 SPDK_TEST_NVME_FDP=1 00:02:04.440 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:04.440 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:04.440 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:04.446 RUN_NIGHTLY=1 00:02:04.449 [Pipeline] } 00:02:04.464 [Pipeline] // stage 00:02:04.483 [Pipeline] stage 00:02:04.486 [Pipeline] { (Run VM) 00:02:04.500 [Pipeline] sh 00:02:04.782 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:04.782 + echo 'Start stage prepare_nvme.sh' 00:02:04.782 Start stage prepare_nvme.sh 00:02:04.782 + [[ -n 5 ]] 00:02:04.782 + disk_prefix=ex5 00:02:04.782 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:04.782 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:04.782 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:04.782 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.782 ++ SPDK_TEST_NVME=1 00:02:04.782 ++ SPDK_TEST_FTL=1 00:02:04.782 ++ SPDK_TEST_ISAL=1 00:02:04.782 ++ SPDK_RUN_ASAN=1 00:02:04.782 ++ SPDK_RUN_UBSAN=1 00:02:04.782 ++ SPDK_TEST_XNVME=1 00:02:04.782 ++ SPDK_TEST_NVME_FDP=1 00:02:04.782 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:04.782 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:04.782 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:04.782 ++ RUN_NIGHTLY=1 00:02:04.782 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:04.782 + nvme_files=() 00:02:04.782 + declare -A nvme_files 00:02:04.782 + backend_dir=/var/lib/libvirt/images/backends 00:02:04.782 + nvme_files['nvme.img']=5G 00:02:04.782 + nvme_files['nvme-cmb.img']=5G 00:02:04.782 + nvme_files['nvme-multi0.img']=4G 00:02:04.782 + nvme_files['nvme-multi1.img']=4G 00:02:04.782 + nvme_files['nvme-multi2.img']=4G 00:02:04.782 + nvme_files['nvme-openstack.img']=8G 00:02:04.782 + nvme_files['nvme-zns.img']=5G 00:02:04.782 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:04.782 + (( SPDK_TEST_FTL == 1 )) 00:02:04.782 + nvme_files["nvme-ftl.img"]=6G 00:02:04.782 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:04.782 + nvme_files["nvme-fdp.img"]=1G 00:02:04.782 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:04.782 + for nvme in "${!nvme_files[@]}" 00:02:04.782 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:02:04.782 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:04.782 + for nvme in "${!nvme_files[@]}" 00:02:04.782 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:02:04.782 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:04.782 + for nvme in "${!nvme_files[@]}" 00:02:04.782 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:02:04.783 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:04.783 + for nvme in "${!nvme_files[@]}" 00:02:04.783 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:02:04.783 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:04.783 + for nvme in "${!nvme_files[@]}" 00:02:04.783 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:02:04.783 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:04.783 + for nvme in "${!nvme_files[@]}" 00:02:04.783 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:02:04.783 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:04.783 + for nvme in "${!nvme_files[@]}" 00:02:04.783 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:02:05.050 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:05.050 + for nvme in "${!nvme_files[@]}" 00:02:05.050 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:02:05.050 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:05.050 + for nvme in "${!nvme_files[@]}" 00:02:05.050 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:02:05.050 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:05.050 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:02:05.050 + echo 'End stage prepare_nvme.sh' 00:02:05.050 End stage prepare_nvme.sh 00:02:05.064 [Pipeline] sh 00:02:05.344 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:05.344 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:05.344 00:02:05.344 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:05.344 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:05.344 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:05.344 HELP=0 00:02:05.344 DRY_RUN=0 00:02:05.344 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img, 00:02:05.344 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:05.344 NVME_AUTO_CREATE=0 00:02:05.344 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,, 00:02:05.344 NVME_CMB=,,,, 00:02:05.344 NVME_PMR=,,,, 00:02:05.344 NVME_ZNS=,,,, 00:02:05.344 NVME_MS=true,,,, 00:02:05.344 NVME_FDP=,,,on, 00:02:05.344 SPDK_VAGRANT_DISTRO=fedora39 00:02:05.344 SPDK_VAGRANT_VMCPU=10 00:02:05.344 SPDK_VAGRANT_VMRAM=12288 00:02:05.344 SPDK_VAGRANT_PROVIDER=libvirt 00:02:05.344 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:05.344 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:05.344 SPDK_OPENSTACK_NETWORK=0 00:02:05.344 VAGRANT_PACKAGE_BOX=0 00:02:05.344 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:05.344 FORCE_DISTRO=true 00:02:05.344 VAGRANT_BOX_VERSION= 00:02:05.344 EXTRA_VAGRANTFILES= 00:02:05.344 NIC_MODEL=virtio 00:02:05.344 00:02:05.344 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:05.344 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:07.890 Bringing machine 'default' up with 'libvirt' provider... 00:02:08.489 ==> default: Creating image (snapshot of base box volume). 00:02:08.489 ==> default: Creating domain with the following settings... 
00:02:08.489 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732004610_7795f9415f6f4dcc3a5e 00:02:08.489 ==> default: -- Domain type: kvm 00:02:08.489 ==> default: -- Cpus: 10 00:02:08.489 ==> default: -- Feature: acpi 00:02:08.489 ==> default: -- Feature: apic 00:02:08.489 ==> default: -- Feature: pae 00:02:08.489 ==> default: -- Memory: 12288M 00:02:08.489 ==> default: -- Memory Backing: hugepages: 00:02:08.489 ==> default: -- Management MAC: 00:02:08.489 ==> default: -- Loader: 00:02:08.489 ==> default: -- Nvram: 00:02:08.489 ==> default: -- Base box: spdk/fedora39 00:02:08.489 ==> default: -- Storage pool: default 00:02:08.489 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732004610_7795f9415f6f4dcc3a5e.img (20G) 00:02:08.489 ==> default: -- Volume Cache: default 00:02:08.489 ==> default: -- Kernel: 00:02:08.489 ==> default: -- Initrd: 00:02:08.489 ==> default: -- Graphics Type: vnc 00:02:08.489 ==> default: -- Graphics Port: -1 00:02:08.489 ==> default: -- Graphics IP: 127.0.0.1 00:02:08.489 ==> default: -- Graphics Password: Not defined 00:02:08.489 ==> default: -- Video Type: cirrus 00:02:08.489 ==> default: -- Video VRAM: 9216 00:02:08.489 ==> default: -- Sound Type: 00:02:08.489 ==> default: -- Keymap: en-us 00:02:08.489 ==> default: -- TPM Path: 00:02:08.489 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:08.489 ==> default: -- Command line args: 00:02:08.489 ==> default: -> value=-device, 00:02:08.489 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:08.489 ==> default: -> value=-drive, 00:02:08.489 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:08.489 ==> default: -> value=-device, 00:02:08.489 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:08.489 ==> default: -> value=-device, 00:02:08.489 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:08.489 ==> default: -> value=-drive, 00:02:08.489 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0, 00:02:08.489 ==> default: -> value=-device, 00:02:08.489 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:08.489 ==> default: -> value=-device, 00:02:08.489 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:08.489 ==> default: -> value=-drive, 00:02:08.489 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:08.489 ==> default: -> value=-device, 00:02:08.489 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:08.489 ==> default: -> value=-drive, 00:02:08.489 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:08.489 ==> default: -> value=-device, 00:02:08.489 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:08.489 ==> default: -> value=-drive, 00:02:08.489 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:08.489 ==> default: -> value=-device, 00:02:08.489 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:08.489 ==> default: -> value=-device, 00:02:08.489 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:08.489 ==> default: -> value=-device, 00:02:08.489 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:08.489 ==> default: -> value=-drive, 00:02:08.489 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:08.489 ==> default: -> value=-device, 00:02:08.489 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:08.780 ==> default: Creating shared folders metadata... 00:02:08.780 ==> default: Starting domain. 00:02:10.177 ==> default: Waiting for domain to get an IP address... 00:02:28.372 ==> default: Waiting for SSH to become available... 00:02:28.372 ==> default: Configuring and enabling network interfaces... 00:02:33.644 default: SSH address: 192.168.121.28:22 00:02:33.644 default: SSH username: vagrant 00:02:33.644 default: SSH auth method: private key 00:02:36.177 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:44.296 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:49.588 ==> default: Mounting SSHFS shared folder... 00:02:52.116 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:52.116 ==> default: Checking Mount.. 00:02:53.491 ==> default: Folder Successfully Mounted! 00:02:53.491 ==> default: Running provisioner: file... 00:02:54.898 default: ~/.gitconfig => .gitconfig 00:02:55.187 00:02:55.187 SUCCESS! 00:02:55.187 00:02:55.187 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:55.187 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:55.187 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:55.187 00:02:55.196 [Pipeline] } 00:02:55.212 [Pipeline] // stage 00:02:55.222 [Pipeline] dir 00:02:55.223 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:55.225 [Pipeline] { 00:02:55.237 [Pipeline] catchError 00:02:55.239 [Pipeline] { 00:02:55.253 [Pipeline] sh 00:02:55.531 + vagrant ssh-config --host vagrant 00:02:55.531 + sed -ne /^Host/,$p 00:02:55.531 + tee ssh_conf 00:02:58.817 Host vagrant 00:02:58.817 HostName 192.168.121.28 00:02:58.817 User vagrant 00:02:58.817 Port 22 00:02:58.817 UserKnownHostsFile /dev/null 00:02:58.817 StrictHostKeyChecking no 00:02:58.817 PasswordAuthentication no 00:02:58.817 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:58.817 IdentitiesOnly yes 00:02:58.818 LogLevel FATAL 00:02:58.818 ForwardAgent yes 00:02:58.818 ForwardX11 yes 00:02:58.818 00:02:58.832 [Pipeline] withEnv 00:02:58.834 [Pipeline] { 00:02:58.848 [Pipeline] sh 00:02:59.131 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:59.131 source /etc/os-release 00:02:59.131 [[ -e /image.version ]] && img=$(< /image.version) 00:02:59.131 # Minimal, systemd-like check. 
00:02:59.131 if [[ -e /.dockerenv ]]; then 00:02:59.131 # Clear garbage from the node's name: 00:02:59.131 # agt-er_autotest_547-896 -> autotest_547-896 00:02:59.131 # $HOSTNAME is the actual container id 00:02:59.131 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:59.131 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:59.131 # We can assume this is a mount from a host where container is running, 00:02:59.131 # so fetch its hostname to easily identify the target swarm worker. 00:02:59.131 container="$(< /etc/hostname) ($agent)" 00:02:59.131 else 00:02:59.131 # Fallback 00:02:59.131 container=$agent 00:02:59.131 fi 00:02:59.131 fi 00:02:59.131 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:59.131 00:02:59.404 [Pipeline] } 00:02:59.420 [Pipeline] // withEnv 00:02:59.429 [Pipeline] setCustomBuildProperty 00:02:59.445 [Pipeline] stage 00:02:59.448 [Pipeline] { (Tests) 00:02:59.465 [Pipeline] sh 00:02:59.748 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:00.021 [Pipeline] sh 00:03:00.303 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:00.575 [Pipeline] timeout 00:03:00.576 Timeout set to expire in 50 min 00:03:00.578 [Pipeline] { 00:03:00.593 [Pipeline] sh 00:03:00.871 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:01.438 HEAD is now at d47eb51c9 bdev: fix a race between reset start and complete 00:03:01.450 [Pipeline] sh 00:03:01.728 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:01.998 [Pipeline] sh 00:03:02.275 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:02.550 [Pipeline] sh 00:03:02.838 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:03:03.107 ++ readlink -f spdk_repo 00:03:03.107 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:03.107 + [[ -n /home/vagrant/spdk_repo ]] 00:03:03.107 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:03.107 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:03.107 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:03.107 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:03.107 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:03.107 + [[ nvme-vg-autotest == pkgdep-* ]] 00:03:03.107 + cd /home/vagrant/spdk_repo 00:03:03.107 + source /etc/os-release 00:03:03.107 ++ NAME='Fedora Linux' 00:03:03.107 ++ VERSION='39 (Cloud Edition)' 00:03:03.107 ++ ID=fedora 00:03:03.107 ++ VERSION_ID=39 00:03:03.107 ++ VERSION_CODENAME= 00:03:03.107 ++ PLATFORM_ID=platform:f39 00:03:03.107 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:03.107 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:03.107 ++ LOGO=fedora-logo-icon 00:03:03.107 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:03.107 ++ HOME_URL=https://fedoraproject.org/ 00:03:03.107 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:03.107 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:03.107 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:03.107 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:03.107 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:03.107 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:03.107 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:03.107 ++ SUPPORT_END=2024-11-12 00:03:03.107 ++ VARIANT='Cloud Edition' 00:03:03.107 ++ VARIANT_ID=cloud 00:03:03.107 + uname -a 00:03:03.107 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:03.107 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:03.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:03.936 Hugepages 00:03:03.936 node hugesize free / total 00:03:03.936 node0 1048576kB 0 / 0 00:03:03.936 node0 2048kB 0 / 0 00:03:03.936 00:03:03.936 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:03.936 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:03.936 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:03.936 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:03.936 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:03.936 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:03.936 + rm -f /tmp/spdk-ld-path 00:03:03.936 + source autorun-spdk.conf 00:03:03.936 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:03.936 ++ SPDK_TEST_NVME=1 00:03:03.936 ++ SPDK_TEST_FTL=1 00:03:03.936 ++ SPDK_TEST_ISAL=1 00:03:03.936 ++ SPDK_RUN_ASAN=1 00:03:03.936 ++ SPDK_RUN_UBSAN=1 00:03:03.936 ++ SPDK_TEST_XNVME=1 00:03:03.936 ++ SPDK_TEST_NVME_FDP=1 00:03:03.936 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:03:03.936 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:03.936 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:03.936 ++ RUN_NIGHTLY=1 00:03:03.936 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:03.936 + [[ -n '' ]] 00:03:03.936 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:03.936 + for M in /var/spdk/build-*-manifest.txt 00:03:03.936 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:03.936 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:03.936 + for M in /var/spdk/build-*-manifest.txt 00:03:03.936 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:03.936 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:03.936 + for M in /var/spdk/build-*-manifest.txt 00:03:03.936 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:03.936 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:03.936 ++ uname 00:03:03.936 + [[ Linux == 
\L\i\n\u\x ]] 00:03:03.936 + sudo dmesg -T 00:03:04.196 + sudo dmesg --clear 00:03:04.196 + dmesg_pid=6204 00:03:04.196 + sudo dmesg -Tw 00:03:04.196 + [[ Fedora Linux == FreeBSD ]] 00:03:04.196 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:04.196 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:04.196 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:04.196 + [[ -x /usr/src/fio-static/fio ]] 00:03:04.196 + export FIO_BIN=/usr/src/fio-static/fio 00:03:04.196 + FIO_BIN=/usr/src/fio-static/fio 00:03:04.196 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:04.196 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:04.196 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:04.196 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:04.196 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:04.196 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:04.196 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:04.196 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:04.196 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:04.196 08:24:25 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:04.196 08:24:25 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:04.196 08:24:25 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:04.196 08:24:25 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:03:04.196 08:24:25 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:03:04.196 08:24:25 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:03:04.196 08:24:25 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:03:04.196 08:24:25 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:04.196 08:24:25 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:03:04.196 08:24:25 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:03:04.196 08:24:25 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:03:04.196 08:24:25 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:04.196 08:24:25 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:04.196 08:24:25 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:03:04.196 08:24:25 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:04.196 08:24:25 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:04.196 08:24:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:04.196 08:24:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:04.196 08:24:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:04.196 08:24:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:04.196 08:24:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:04.196 08:24:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:04.196 08:24:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.196 08:24:26 -- 
paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.196 08:24:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.196 08:24:26 -- paths/export.sh@5 -- $ export PATH 00:03:04.196 08:24:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.196 08:24:26 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:04.196 08:24:26 -- common/autobuild_common.sh@486 -- $ date +%s 00:03:04.196 08:24:26 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732004666.XXXXXX 00:03:04.196 08:24:26 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732004666.Akwj7h 00:03:04.196 08:24:26 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:03:04.196 08:24:26 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:03:04.196 08:24:26 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:04.196 08:24:26 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:03:04.196 08:24:26 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:04.196 08:24:26 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:04.196 08:24:26 -- common/autobuild_common.sh@502 -- $ get_config_params 00:03:04.196 08:24:26 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:04.196 08:24:26 -- common/autotest_common.sh@10 -- $ set +x 00:03:04.197 08:24:26 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:03:04.197 08:24:26 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:03:04.197 08:24:26 -- pm/common@17 -- $ local monitor 00:03:04.197 08:24:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.197 08:24:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.197 08:24:26 -- pm/common@25 -- $ 
sleep 1 00:03:04.197 08:24:26 -- pm/common@21 -- $ date +%s 00:03:04.197 08:24:26 -- pm/common@21 -- $ date +%s 00:03:04.197 08:24:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732004666 00:03:04.197 08:24:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732004666 00:03:04.456 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732004666_collect-vmstat.pm.log 00:03:04.456 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732004666_collect-cpu-load.pm.log 00:03:05.395 08:24:27 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:03:05.395 08:24:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:05.395 08:24:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:05.395 08:24:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:05.395 08:24:27 -- spdk/autobuild.sh@16 -- $ date -u 00:03:05.395 Tue Nov 19 08:24:27 AM UTC 2024 00:03:05.395 08:24:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:05.395 v25.01-pre-190-gd47eb51c9 00:03:05.395 08:24:27 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:05.395 08:24:27 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:05.395 08:24:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:05.395 08:24:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:05.395 08:24:27 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.395 ************************************ 00:03:05.395 START TEST asan 00:03:05.395 ************************************ 00:03:05.395 using asan 00:03:05.395 08:24:27 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:03:05.395 00:03:05.395 real 0m0.001s 00:03:05.395 user 0m0.000s 00:03:05.395 sys 0m0.000s 00:03:05.395 08:24:27 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:05.395 08:24:27 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:05.395 ************************************ 00:03:05.395 END TEST asan 00:03:05.395 ************************************ 00:03:05.395 08:24:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:05.395 08:24:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:05.395 08:24:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:05.395 08:24:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:05.395 08:24:27 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.395 ************************************ 00:03:05.395 START TEST ubsan 00:03:05.395 ************************************ 00:03:05.395 using ubsan 00:03:05.395 08:24:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:05.395 00:03:05.395 real 0m0.000s 00:03:05.395 user 0m0.000s 00:03:05.395 sys 0m0.000s 00:03:05.395 08:24:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:05.395 08:24:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:05.395 ************************************ 00:03:05.395 END TEST ubsan 00:03:05.395 ************************************ 00:03:05.395 08:24:27 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:03:05.395 08:24:27 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:05.395 08:24:27 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:05.395 08:24:27 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 
1 ']' 00:03:05.395 08:24:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:05.395 08:24:27 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.395 ************************************ 00:03:05.395 START TEST build_native_dpdk 00:03:05.395 ************************************ 00:03:05.395 08:24:27 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:03:05.395 caf0f5d395 version: 22.11.4 00:03:05.395 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:03:05.395 dc9c799c7d vhost: fix missing spinlock unlock 00:03:05.395 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:03:05.395 6ef77f2a5e net/gve: fix RX buffer size alignment 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:05.395 08:24:27 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:05.655 08:24:27 
build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:03:05.655 patching file config/rte_config.h 00:03:05.655 Hunk #1 succeeded at 60 (offset 1 line). 00:03:05.655 08:24:27 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:05.655 08:24:27 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:03:05.656 08:24:27 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:03:05.656 patching file lib/pcapng/rte_pcapng.c 00:03:05.656 Hunk #1 succeeded at 110 (offset -18 lines). 00:03:05.656 08:24:27 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:05.656 08:24:27 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:03:05.656 08:24:27 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:03:05.656 08:24:27 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:03:05.656 08:24:27 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:03:05.656 08:24:27 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:03:05.656 08:24:27 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:12.296 The Meson build system 00:03:12.296 Version: 1.5.0 00:03:12.296 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:12.296 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:12.296 Build type: native build 00:03:12.296 Program cat found: YES (/usr/bin/cat) 00:03:12.296 Project name: DPDK 00:03:12.296 Project version: 22.11.4 00:03:12.296 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:12.296 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:12.296 Host machine cpu family: x86_64 00:03:12.296 Host machine cpu: x86_64 00:03:12.296 Message: ## Building in Developer Mode ## 00:03:12.296 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:12.296 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:12.296 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:12.296 Program objdump found: YES (/usr/bin/objdump) 00:03:12.296 Program python3 found: YES (/usr/bin/python3) 00:03:12.296 Program cat found: YES (/usr/bin/cat) 00:03:12.296 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:03:12.296 Checking for size of "void *" : 8 00:03:12.296 Checking for size of "void *" : 8 (cached) 00:03:12.296 Library m found: YES 00:03:12.296 Library numa found: YES 00:03:12.296 Has header "numaif.h" : YES 00:03:12.296 Library fdt found: NO 00:03:12.296 Library execinfo found: NO 00:03:12.296 Has header "execinfo.h" : YES 00:03:12.296 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:12.296 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:12.296 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:12.296 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:12.296 Run-time dependency openssl found: YES 3.1.1 00:03:12.296 Run-time dependency libpcap found: YES 1.10.4 00:03:12.296 Has header "pcap.h" with dependency libpcap: YES 00:03:12.296 Compiler for C supports arguments -Wcast-qual: YES 00:03:12.296 Compiler for C supports arguments -Wdeprecated: YES 00:03:12.296 Compiler for C supports arguments -Wformat: YES 00:03:12.296 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:12.296 Compiler for C supports arguments -Wformat-security: NO 00:03:12.296 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:12.296 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:12.296 Compiler for C supports arguments -Wnested-externs: YES 00:03:12.296 Compiler for C supports arguments -Wold-style-definition: YES 00:03:12.296 Compiler for C supports arguments -Wpointer-arith: YES 00:03:12.296 Compiler for C supports arguments -Wsign-compare: YES 00:03:12.296 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:12.296 Compiler for C supports arguments -Wundef: YES 00:03:12.296 Compiler for C supports arguments -Wwrite-strings: YES 00:03:12.296 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:12.296 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:12.296 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:12.296 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:12.296 Compiler for C supports arguments -mavx512f: YES 00:03:12.296 Checking if "AVX512 checking" compiles: YES 00:03:12.296 Fetching value of define "__SSE4_2__" : 1 00:03:12.296 Fetching value of define "__AES__" : 1 00:03:12.296 Fetching value of define "__AVX__" : 1 00:03:12.296 Fetching value of define "__AVX2__" : 1 00:03:12.296 Fetching value of define "__AVX512BW__" : 1 00:03:12.296 Fetching value of define "__AVX512CD__" : 1 00:03:12.296 Fetching value of define "__AVX512DQ__" : 1 00:03:12.296 Fetching value of define "__AVX512F__" : 1 00:03:12.296 Fetching value of define "__AVX512VL__" : 1 00:03:12.296 Fetching value of define "__PCLMUL__" : 1 00:03:12.296 Fetching value of define "__RDRND__" : 1 00:03:12.296 Fetching value of define "__RDSEED__" : 1 00:03:12.296 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:12.296 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:12.296 Message: lib/kvargs: Defining dependency "kvargs" 00:03:12.296 Message: lib/telemetry: Defining dependency "telemetry" 00:03:12.296 Checking for function "getentropy" : YES 00:03:12.296 Message: lib/eal: Defining dependency "eal" 00:03:12.296 Message: lib/ring: Defining dependency "ring" 00:03:12.296 Message: lib/rcu: Defining dependency "rcu" 00:03:12.296 Message: lib/mempool: Defining dependency "mempool" 00:03:12.296 Message: lib/mbuf: Defining dependency "mbuf" 00:03:12.296 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:12.296 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:03:12.296 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:12.296 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:12.296 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:12.296 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:12.296 Compiler for C supports arguments -mpclmul: YES 00:03:12.296 Compiler for C supports arguments -maes: YES 00:03:12.296 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:12.296 Compiler for C supports arguments -mavx512bw: YES 00:03:12.296 Compiler for C supports arguments -mavx512dq: YES 00:03:12.296 Compiler for C supports arguments -mavx512vl: YES 00:03:12.296 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:12.296 Compiler for C supports arguments -mavx2: YES 00:03:12.296 Compiler for C supports arguments -mavx: YES 00:03:12.296 Message: lib/net: Defining dependency "net" 00:03:12.296 Message: lib/meter: Defining dependency "meter" 00:03:12.296 Message: lib/ethdev: Defining dependency "ethdev" 00:03:12.296 Message: lib/pci: Defining dependency "pci" 00:03:12.296 Message: lib/cmdline: Defining dependency "cmdline" 00:03:12.296 Message: lib/metrics: Defining dependency "metrics" 00:03:12.296 Message: lib/hash: Defining dependency "hash" 00:03:12.296 Message: lib/timer: Defining dependency "timer" 00:03:12.296 Fetching value of define "__AVX2__" : 1 (cached) 00:03:12.296 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:12.296 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:12.296 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:12.296 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:12.296 Message: lib/acl: Defining dependency "acl" 00:03:12.296 Message: lib/bbdev: Defining dependency "bbdev" 00:03:12.296 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:12.296 Run-time dependency libelf found: YES 0.191 00:03:12.297 Message: lib/bpf: Defining dependency "bpf" 00:03:12.297 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:12.297 Message: lib/compressdev: Defining dependency "compressdev" 00:03:12.297 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:12.297 Message: lib/distributor: Defining dependency "distributor" 00:03:12.297 Message: lib/efd: Defining dependency "efd" 00:03:12.297 Message: lib/eventdev: Defining dependency "eventdev" 00:03:12.297 Message: lib/gpudev: Defining dependency "gpudev" 00:03:12.297 Message: lib/gro: Defining dependency "gro" 00:03:12.297 Message: lib/gso: Defining dependency "gso" 00:03:12.297 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:12.297 Message: lib/jobstats: Defining dependency "jobstats" 00:03:12.297 Message: lib/latencystats: Defining dependency "latencystats" 00:03:12.297 Message: lib/lpm: Defining dependency "lpm" 00:03:12.297 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:12.297 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:12.297 Fetching value of define "__AVX512IFMA__" : (undefined) 00:03:12.297 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:03:12.297 Message: lib/member: Defining dependency "member" 00:03:12.297 Message: lib/pcapng: Defining dependency "pcapng" 00:03:12.297 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:12.297 Message: lib/power: Defining dependency "power" 00:03:12.297 Message: lib/rawdev: Defining dependency "rawdev" 00:03:12.297 Message: lib/regexdev: Defining dependency "regexdev" 00:03:12.297 Message: lib/dmadev: 
Defining dependency "dmadev" 00:03:12.297 Message: lib/rib: Defining dependency "rib" 00:03:12.297 Message: lib/reorder: Defining dependency "reorder" 00:03:12.297 Message: lib/sched: Defining dependency "sched" 00:03:12.297 Message: lib/security: Defining dependency "security" 00:03:12.297 Message: lib/stack: Defining dependency "stack" 00:03:12.297 Has header "linux/userfaultfd.h" : YES 00:03:12.297 Message: lib/vhost: Defining dependency "vhost" 00:03:12.297 Message: lib/ipsec: Defining dependency "ipsec" 00:03:12.297 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:12.297 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:12.297 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:12.297 Message: lib/fib: Defining dependency "fib" 00:03:12.297 Message: lib/port: Defining dependency "port" 00:03:12.297 Message: lib/pdump: Defining dependency "pdump" 00:03:12.297 Message: lib/table: Defining dependency "table" 00:03:12.297 Message: lib/pipeline: Defining dependency "pipeline" 00:03:12.297 Message: lib/graph: Defining dependency "graph" 00:03:12.297 Message: lib/node: Defining dependency "node" 00:03:12.297 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:12.297 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:12.297 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:12.297 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:12.297 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:12.297 Compiler for C supports arguments -Wno-unused-value: YES 00:03:12.297 Compiler for C supports arguments -Wno-format: YES 00:03:12.297 Compiler for C supports arguments -Wno-format-security: YES 00:03:12.297 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:12.297 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:13.236 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:13.236 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:13.236 Fetching value of define "__AVX2__" : 1 (cached) 00:03:13.236 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:13.236 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:13.236 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:13.236 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:13.236 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:13.236 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:13.236 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:13.236 Configuring doxy-api.conf using configuration 00:03:13.236 Program sphinx-build found: NO 00:03:13.236 Configuring rte_build_config.h using configuration 00:03:13.236 Message: 00:03:13.236 ================= 00:03:13.236 Applications Enabled 00:03:13.236 ================= 00:03:13.236 00:03:13.236 apps: 00:03:13.236 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:03:13.236 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:03:13.236 test-security-perf, 00:03:13.236 00:03:13.236 Message: 00:03:13.236 ================= 00:03:13.236 Libraries Enabled 00:03:13.236 ================= 00:03:13.236 00:03:13.236 libs: 00:03:13.236 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:03:13.236 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:03:13.236 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:03:13.236 eventdev, 
gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:03:13.236 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:03:13.236 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:03:13.236 table, pipeline, graph, node, 00:03:13.236 00:03:13.236 Message: 00:03:13.236 =============== 00:03:13.236 Drivers Enabled 00:03:13.236 =============== 00:03:13.236 00:03:13.236 common: 00:03:13.236 00:03:13.236 bus: 00:03:13.236 pci, vdev, 00:03:13.236 mempool: 00:03:13.236 ring, 00:03:13.236 dma: 00:03:13.236 00:03:13.236 net: 00:03:13.236 i40e, 00:03:13.236 raw: 00:03:13.236 00:03:13.236 crypto: 00:03:13.236 00:03:13.236 compress: 00:03:13.236 00:03:13.236 regex: 00:03:13.236 00:03:13.236 vdpa: 00:03:13.236 00:03:13.236 event: 00:03:13.236 00:03:13.236 baseband: 00:03:13.236 00:03:13.236 gpu: 00:03:13.236 00:03:13.236 00:03:13.236 Message: 00:03:13.236 ================= 00:03:13.236 Content Skipped 00:03:13.236 ================= 00:03:13.236 00:03:13.236 apps: 00:03:13.236 00:03:13.236 libs: 00:03:13.236 kni: explicitly disabled via build config (deprecated lib) 00:03:13.236 flow_classify: explicitly disabled via build config (deprecated lib) 00:03:13.236 00:03:13.236 drivers: 00:03:13.236 common/cpt: not in enabled drivers build config 00:03:13.236 common/dpaax: not in enabled drivers build config 00:03:13.236 common/iavf: not in enabled drivers build config 00:03:13.236 common/idpf: not in enabled drivers build config 00:03:13.236 common/mvep: not in enabled drivers build config 00:03:13.236 common/octeontx: not in enabled drivers build config 00:03:13.236 bus/auxiliary: not in enabled drivers build config 00:03:13.236 bus/dpaa: not in enabled drivers build config 00:03:13.236 bus/fslmc: not in enabled drivers build config 00:03:13.236 bus/ifpga: not in enabled drivers build config 00:03:13.236 bus/vmbus: not in enabled drivers build config 00:03:13.236 common/cnxk: not in enabled drivers build config 00:03:13.236 common/mlx5: not in enabled drivers build config 00:03:13.236 common/qat: not in enabled drivers build config 00:03:13.236 common/sfc_efx: not in enabled drivers build config 00:03:13.236 mempool/bucket: not in enabled drivers build config 00:03:13.236 mempool/cnxk: not in enabled drivers build config 00:03:13.236 mempool/dpaa: not in enabled drivers build config 00:03:13.236 mempool/dpaa2: not in enabled drivers build config 00:03:13.236 mempool/octeontx: not in enabled drivers build config 00:03:13.236 mempool/stack: not in enabled drivers build config 00:03:13.236 dma/cnxk: not in enabled drivers build config 00:03:13.236 dma/dpaa: not in enabled drivers build config 00:03:13.236 dma/dpaa2: not in enabled drivers build config 00:03:13.236 dma/hisilicon: not in enabled drivers build config 00:03:13.236 dma/idxd: not in enabled drivers build config 00:03:13.236 dma/ioat: not in enabled drivers build config 00:03:13.236 dma/skeleton: not in enabled drivers build config 00:03:13.236 net/af_packet: not in enabled drivers build config 00:03:13.236 net/af_xdp: not in enabled drivers build config 00:03:13.236 net/ark: not in enabled drivers build config 00:03:13.236 net/atlantic: not in enabled drivers build config 00:03:13.236 net/avp: not in enabled drivers build config 00:03:13.236 net/axgbe: not in enabled drivers build config 00:03:13.236 net/bnx2x: not in enabled drivers build config 00:03:13.236 net/bnxt: not in enabled drivers build config 00:03:13.236 net/bonding: not in enabled drivers build config 00:03:13.236 net/cnxk: not in enabled drivers build config 
00:03:13.236 net/cxgbe: not in enabled drivers build config 00:03:13.236 net/dpaa: not in enabled drivers build config 00:03:13.236 net/dpaa2: not in enabled drivers build config 00:03:13.236 net/e1000: not in enabled drivers build config 00:03:13.236 net/ena: not in enabled drivers build config 00:03:13.236 net/enetc: not in enabled drivers build config 00:03:13.236 net/enetfec: not in enabled drivers build config 00:03:13.236 net/enic: not in enabled drivers build config 00:03:13.236 net/failsafe: not in enabled drivers build config 00:03:13.236 net/fm10k: not in enabled drivers build config 00:03:13.236 net/gve: not in enabled drivers build config 00:03:13.236 net/hinic: not in enabled drivers build config 00:03:13.236 net/hns3: not in enabled drivers build config 00:03:13.236 net/iavf: not in enabled drivers build config 00:03:13.236 net/ice: not in enabled drivers build config 00:03:13.236 net/idpf: not in enabled drivers build config 00:03:13.236 net/igc: not in enabled drivers build config 00:03:13.236 net/ionic: not in enabled drivers build config 00:03:13.236 net/ipn3ke: not in enabled drivers build config 00:03:13.236 net/ixgbe: not in enabled drivers build config 00:03:13.236 net/kni: not in enabled drivers build config 00:03:13.236 net/liquidio: not in enabled drivers build config 00:03:13.236 net/mana: not in enabled drivers build config 00:03:13.236 net/memif: not in enabled drivers build config 00:03:13.236 net/mlx4: not in enabled drivers build config 00:03:13.236 net/mlx5: not in enabled drivers build config 00:03:13.236 net/mvneta: not in enabled drivers build config 00:03:13.236 net/mvpp2: not in enabled drivers build config 00:03:13.236 net/netvsc: not in enabled drivers build config 00:03:13.236 net/nfb: not in enabled drivers build config 00:03:13.236 net/nfp: not in enabled drivers build config 00:03:13.236 net/ngbe: not in enabled drivers build config 00:03:13.236 net/null: not in enabled drivers build config 00:03:13.236 net/octeontx: not in enabled drivers build config 00:03:13.236 net/octeon_ep: not in enabled drivers build config 00:03:13.236 net/pcap: not in enabled drivers build config 00:03:13.236 net/pfe: not in enabled drivers build config 00:03:13.236 net/qede: not in enabled drivers build config 00:03:13.236 net/ring: not in enabled drivers build config 00:03:13.236 net/sfc: not in enabled drivers build config 00:03:13.236 net/softnic: not in enabled drivers build config 00:03:13.236 net/tap: not in enabled drivers build config 00:03:13.236 net/thunderx: not in enabled drivers build config 00:03:13.236 net/txgbe: not in enabled drivers build config 00:03:13.236 net/vdev_netvsc: not in enabled drivers build config 00:03:13.236 net/vhost: not in enabled drivers build config 00:03:13.236 net/virtio: not in enabled drivers build config 00:03:13.236 net/vmxnet3: not in enabled drivers build config 00:03:13.236 raw/cnxk_bphy: not in enabled drivers build config 00:03:13.236 raw/cnxk_gpio: not in enabled drivers build config 00:03:13.236 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:13.236 raw/ifpga: not in enabled drivers build config 00:03:13.236 raw/ntb: not in enabled drivers build config 00:03:13.236 raw/skeleton: not in enabled drivers build config 00:03:13.236 crypto/armv8: not in enabled drivers build config 00:03:13.236 crypto/bcmfs: not in enabled drivers build config 00:03:13.236 crypto/caam_jr: not in enabled drivers build config 00:03:13.236 crypto/ccp: not in enabled drivers build config 00:03:13.236 crypto/cnxk: not in enabled drivers 
build config 00:03:13.236 crypto/dpaa_sec: not in enabled drivers build config 00:03:13.236 crypto/dpaa2_sec: not in enabled drivers build config 00:03:13.236 crypto/ipsec_mb: not in enabled drivers build config 00:03:13.236 crypto/mlx5: not in enabled drivers build config 00:03:13.236 crypto/mvsam: not in enabled drivers build config 00:03:13.236 crypto/nitrox: not in enabled drivers build config 00:03:13.236 crypto/null: not in enabled drivers build config 00:03:13.236 crypto/octeontx: not in enabled drivers build config 00:03:13.236 crypto/openssl: not in enabled drivers build config 00:03:13.236 crypto/scheduler: not in enabled drivers build config 00:03:13.236 crypto/uadk: not in enabled drivers build config 00:03:13.236 crypto/virtio: not in enabled drivers build config 00:03:13.236 compress/isal: not in enabled drivers build config 00:03:13.236 compress/mlx5: not in enabled drivers build config 00:03:13.236 compress/octeontx: not in enabled drivers build config 00:03:13.236 compress/zlib: not in enabled drivers build config 00:03:13.236 regex/mlx5: not in enabled drivers build config 00:03:13.236 regex/cn9k: not in enabled drivers build config 00:03:13.236 vdpa/ifc: not in enabled drivers build config 00:03:13.236 vdpa/mlx5: not in enabled drivers build config 00:03:13.236 vdpa/sfc: not in enabled drivers build config 00:03:13.236 event/cnxk: not in enabled drivers build config 00:03:13.236 event/dlb2: not in enabled drivers build config 00:03:13.236 event/dpaa: not in enabled drivers build config 00:03:13.236 event/dpaa2: not in enabled drivers build config 00:03:13.236 event/dsw: not in enabled drivers build config 00:03:13.236 event/opdl: not in enabled drivers build config 00:03:13.236 event/skeleton: not in enabled drivers build config 00:03:13.236 event/sw: not in enabled drivers build config 00:03:13.236 event/octeontx: not in enabled drivers build config 00:03:13.236 baseband/acc: not in enabled drivers build config 00:03:13.236 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:13.237 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:13.237 baseband/la12xx: not in enabled drivers build config 00:03:13.237 baseband/null: not in enabled drivers build config 00:03:13.237 baseband/turbo_sw: not in enabled drivers build config 00:03:13.237 gpu/cuda: not in enabled drivers build config 00:03:13.237 00:03:13.237 00:03:13.237 Build targets in project: 311 00:03:13.237 00:03:13.237 DPDK 22.11.4 00:03:13.237 00:03:13.237 User defined options 00:03:13.237 libdir : lib 00:03:13.237 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:13.237 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:13.237 c_link_args : 00:03:13.237 enable_docs : false 00:03:13.237 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:13.237 enable_kmods : false 00:03:13.237 machine : native 00:03:13.237 tests : false 00:03:13.237 00:03:13.237 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:13.237 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
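The options summary above can be read back into the configure command that produced it. The sketch below is an approximate reconstruction, assuming a plain `meson setup` call with the source tree at /home/vagrant/spdk_repo/dpdk and the build directory at build-tmp; the deprecation warning just above indicates the job actually used the older `meson [options]` spelling, and the exact wrapper in autobuild_common.sh may pass these options differently. The `-m*` flag probes and `__AVX512F__`-style define checks earlier in the output are what gate the SIMD objects (acl_run_avx512, net_crc_avx512, and similar) that appear later in the compile stream.

  # Approximate reconstruction of the configure step; option names and values
  # are copied from the "User defined options" summary above, not from the
  # literal command line run by this job.
  meson setup /home/vagrant/spdk_repo/dpdk/build-tmp /home/vagrant/spdk_repo/dpdk \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base

  # The build itself is then driven by ninja, as logged below:
  #   ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10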
00:03:13.237 08:24:35 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:13.496 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:13.496 [1/740] Generating lib/rte_kvargs_def with a custom command 00:03:13.496 [2/740] Generating lib/rte_telemetry_def with a custom command 00:03:13.496 [3/740] Generating lib/rte_kvargs_mingw with a custom command 00:03:13.496 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:03:13.496 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:13.496 [6/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:13.496 [7/740] Linking static target lib/librte_kvargs.a 00:03:13.496 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:13.496 [9/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:13.496 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:13.496 [11/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:13.755 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:13.755 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:13.755 [14/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:13.755 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:13.755 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:13.755 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:13.755 [18/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.755 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:13.755 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:13.755 [21/740] Linking target lib/librte_kvargs.so.23.0 00:03:13.755 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:14.016 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:14.016 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:14.016 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:14.016 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:14.016 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:14.016 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:14.016 [29/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:14.016 [30/740] Linking static target lib/librte_telemetry.a 00:03:14.016 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:14.016 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:14.016 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:14.016 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:14.275 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:14.275 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:14.275 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:14.275 [38/740] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:14.275 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:14.275 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:14.275 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:14.275 [42/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.534 [43/740] Linking target lib/librte_telemetry.so.23.0 00:03:14.534 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:14.534 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:14.534 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:14.534 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:14.534 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:14.534 [49/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:14.534 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:14.534 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:14.534 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:14.534 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:14.795 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:14.795 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:14.795 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:14.795 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:14.795 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:14.795 [59/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:14.795 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:14.795 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:14.795 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:14.795 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:14.795 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:14.795 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:14.795 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:03:14.795 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:14.795 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:14.795 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:14.795 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:14.795 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:15.055 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:15.055 [73/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:15.055 [74/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:15.055 [75/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:15.055 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:15.055 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:15.055 [78/740] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:15.055 [79/740] Generating lib/rte_eal_def with a custom command 00:03:15.055 [80/740] Generating lib/rte_eal_mingw with a custom command 00:03:15.055 [81/740] Generating lib/rte_ring_def with a custom command 00:03:15.055 [82/740] Generating lib/rte_ring_mingw with a custom command 00:03:15.055 [83/740] Generating lib/rte_rcu_def with a custom command 00:03:15.055 [84/740] Generating lib/rte_rcu_mingw with a custom command 00:03:15.055 [85/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:15.055 [86/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:15.055 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:15.055 [88/740] Linking static target lib/librte_ring.a 00:03:15.314 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:15.314 [90/740] Generating lib/rte_mempool_def with a custom command 00:03:15.314 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:03:15.314 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:15.314 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:15.314 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.573 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:15.573 [96/740] Generating lib/rte_mbuf_def with a custom command 00:03:15.573 [97/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:15.573 [98/740] Generating lib/rte_mbuf_mingw with a custom command 00:03:15.573 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:15.573 [100/740] Linking static target lib/librte_eal.a 00:03:15.573 [101/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:15.833 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:15.833 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:15.833 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:15.833 [105/740] Linking static target lib/librte_rcu.a 00:03:15.833 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:15.833 [107/740] Linking static target lib/librte_mempool.a 00:03:16.093 [108/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:16.093 [109/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:16.093 [110/740] Generating lib/rte_net_def with a custom command 00:03:16.093 [111/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:16.093 [112/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:16.093 [113/740] Generating lib/rte_net_mingw with a custom command 00:03:16.093 [114/740] Generating lib/rte_meter_def with a custom command 00:03:16.093 [115/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:16.093 [116/740] Generating lib/rte_meter_mingw with a custom command 00:03:16.093 [117/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.093 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:16.093 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:16.093 [120/740] Linking static target lib/librte_meter.a 00:03:16.353 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:16.353 [122/740] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:16.353 [123/740] Linking static target lib/librte_net.a 00:03:16.353 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.613 [125/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:16.613 [126/740] Linking static target lib/librte_mbuf.a 00:03:16.613 [127/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:16.613 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:16.613 [129/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.613 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:16.872 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:16.872 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:16.872 [133/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.131 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:17.131 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.131 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:17.390 [137/740] Generating lib/rte_ethdev_def with a custom command 00:03:17.390 [138/740] Generating lib/rte_ethdev_mingw with a custom command 00:03:17.390 [139/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:17.390 [140/740] Generating lib/rte_pci_def with a custom command 00:03:17.390 [141/740] Generating lib/rte_pci_mingw with a custom command 00:03:17.390 [142/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:17.390 [143/740] Linking static target lib/librte_pci.a 00:03:17.390 [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:17.390 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:17.390 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:17.390 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:17.648 [148/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.648 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:17.648 [150/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:17.648 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:17.648 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:17.648 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:17.648 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:17.648 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:17.648 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:17.648 [157/740] Generating lib/rte_cmdline_def with a custom command 00:03:17.907 [158/740] Generating lib/rte_cmdline_mingw with a custom command 00:03:17.907 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:17.907 [160/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:17.907 [161/740] Generating lib/rte_metrics_def with a custom command 00:03:17.907 [162/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:17.907 
[163/740] Generating lib/rte_metrics_mingw with a custom command 00:03:17.907 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:17.907 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:17.907 [166/740] Linking static target lib/librte_cmdline.a 00:03:17.907 [167/740] Generating lib/rte_hash_def with a custom command 00:03:17.907 [168/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:17.907 [169/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:17.907 [170/740] Generating lib/rte_hash_mingw with a custom command 00:03:17.907 [171/740] Generating lib/rte_timer_def with a custom command 00:03:17.907 [172/740] Generating lib/rte_timer_mingw with a custom command 00:03:18.167 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:18.167 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:18.167 [175/740] Linking static target lib/librte_metrics.a 00:03:18.426 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:18.426 [177/740] Linking static target lib/librte_timer.a 00:03:18.685 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.685 [179/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:18.685 [180/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:18.685 [181/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:18.945 [182/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.946 [183/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.946 [184/740] Generating lib/rte_acl_def with a custom command 00:03:18.946 [185/740] Generating lib/rte_acl_mingw with a custom command 00:03:18.946 [186/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:19.205 [187/740] Generating lib/rte_bbdev_def with a custom command 00:03:19.205 [188/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:19.205 [189/740] Generating lib/rte_bbdev_mingw with a custom command 00:03:19.205 [190/740] Generating lib/rte_bitratestats_def with a custom command 00:03:19.205 [191/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:19.205 [192/740] Linking static target lib/librte_ethdev.a 00:03:19.205 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:03:19.464 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:19.464 [195/740] Linking static target lib/librte_bitratestats.a 00:03:19.724 [196/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:19.724 [197/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:19.724 [198/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:19.724 [199/740] Linking static target lib/librte_bbdev.a 00:03:19.724 [200/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.982 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:20.242 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:20.242 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:20.242 [204/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.503 [205/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:20.503 
[206/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:20.503 [207/740] Linking static target lib/librte_hash.a 00:03:20.503 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:21.208 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:21.208 [210/740] Generating lib/rte_bpf_def with a custom command 00:03:21.208 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:03:21.208 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:21.208 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:03:21.208 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:03:21.208 [215/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:21.208 [216/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:21.208 [217/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:21.208 [218/740] Linking static target lib/librte_cfgfile.a 00:03:21.468 [219/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.468 [220/740] Generating lib/rte_compressdev_def with a custom command 00:03:21.468 [221/740] Generating lib/rte_compressdev_mingw with a custom command 00:03:21.468 [222/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:21.468 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:21.468 [224/740] Linking static target lib/librte_bpf.a 00:03:21.726 [225/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.727 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:21.727 [227/740] Generating lib/rte_cryptodev_def with a custom command 00:03:21.727 [228/740] Generating lib/rte_cryptodev_mingw with a custom command 00:03:21.727 [229/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:21.727 [230/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:21.727 [231/740] Linking static target lib/librte_acl.a 00:03:21.985 [232/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:21.985 [233/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:21.985 [234/740] Linking static target lib/librte_compressdev.a 00:03:21.985 [235/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.985 [236/740] Generating lib/rte_distributor_def with a custom command 00:03:21.985 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:03:21.985 [238/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.243 [239/740] Generating lib/rte_efd_def with a custom command 00:03:22.243 [240/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:22.243 [241/740] Generating lib/rte_efd_mingw with a custom command 00:03:22.243 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:22.243 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:22.502 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:22.502 [245/740] Linking static target lib/librte_distributor.a 00:03:22.502 [246/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:22.761 [247/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:22.761 [248/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.761 [249/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.761 [250/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.761 [251/740] Linking target lib/librte_eal.so.23.0 00:03:23.020 [252/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:23.020 [253/740] Linking target lib/librte_ring.so.23.0 00:03:23.020 [254/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:23.020 [255/740] Linking target lib/librte_meter.so.23.0 00:03:23.278 [256/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:23.278 [257/740] Linking target lib/librte_rcu.so.23.0 00:03:23.278 [258/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:23.278 [259/740] Linking target lib/librte_mempool.so.23.0 00:03:23.278 [260/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:23.278 [261/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:23.278 [262/740] Linking target lib/librte_pci.so.23.0 00:03:23.279 [263/740] Linking target lib/librte_timer.so.23.0 00:03:23.537 [264/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:23.537 [265/740] Linking target lib/librte_mbuf.so.23.0 00:03:23.537 [266/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:23.537 [267/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:23.537 [268/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:23.537 [269/740] Linking target lib/librte_acl.so.23.0 00:03:23.537 [270/740] Linking static target lib/librte_efd.a 00:03:23.537 [271/740] Linking target lib/librte_cfgfile.so.23.0 00:03:23.537 [272/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:23.537 [273/740] Generating lib/rte_eventdev_def with a custom command 00:03:23.537 [274/740] Linking target lib/librte_net.so.23.0 00:03:23.537 [275/740] Linking target lib/librte_bbdev.so.23.0 00:03:23.537 [276/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:23.795 [277/740] Linking target lib/librte_compressdev.so.23.0 00:03:23.795 [278/740] Linking target lib/librte_distributor.so.23.0 00:03:23.795 [279/740] Generating lib/rte_eventdev_mingw with a custom command 00:03:23.795 [280/740] Generating lib/rte_gpudev_def with a custom command 00:03:23.795 [281/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:23.795 [282/740] Generating lib/rte_gpudev_mingw with a custom command 00:03:23.795 [283/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.795 [284/740] Linking target lib/librte_cmdline.so.23.0 00:03:23.796 [285/740] Linking target lib/librte_hash.so.23.0 00:03:23.796 [286/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:23.796 [287/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:23.796 [288/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:23.796 [289/740] Linking static target lib/librte_cryptodev.a 00:03:24.055 
[290/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.055 [291/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:24.055 [292/740] Linking target lib/librte_efd.so.23.0 00:03:24.055 [293/740] Linking target lib/librte_ethdev.so.23.0 00:03:24.055 [294/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:24.055 [295/740] Generating lib/rte_gro_def with a custom command 00:03:24.055 [296/740] Generating lib/rte_gro_mingw with a custom command 00:03:24.055 [297/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:24.313 [298/740] Linking target lib/librte_metrics.so.23.0 00:03:24.313 [299/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:24.313 [300/740] Linking target lib/librte_bitratestats.so.23.0 00:03:24.313 [301/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:24.313 [302/740] Linking static target lib/librte_gpudev.a 00:03:24.313 [303/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:24.313 [304/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:24.313 [305/740] Linking target lib/librte_bpf.so.23.0 00:03:24.572 [306/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:24.572 [307/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:24.572 [308/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:24.831 [309/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:24.831 [310/740] Linking static target lib/librte_gro.a 00:03:24.831 [311/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:24.831 [312/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:24.831 [313/740] Generating lib/rte_gso_def with a custom command 00:03:24.831 [314/740] Generating lib/rte_gso_mingw with a custom command 00:03:24.831 [315/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:24.831 [316/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.094 [317/740] Linking static target lib/librte_eventdev.a 00:03:25.095 [318/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:25.095 [319/740] Linking target lib/librte_gro.so.23.0 00:03:25.095 [320/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:25.095 [321/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:25.095 [322/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:25.095 [323/740] Linking static target lib/librte_gso.a 00:03:25.095 [324/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.352 [325/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.352 [326/740] Linking target lib/librte_gpudev.so.23.0 00:03:25.352 [327/740] Linking target lib/librte_gso.so.23.0 00:03:25.352 [328/740] Generating lib/rte_ip_frag_mingw with a custom command 00:03:25.352 [329/740] Generating lib/rte_ip_frag_def with a custom command 00:03:25.352 [330/740] Generating lib/rte_jobstats_def with a custom command 00:03:25.352 [331/740] Generating lib/rte_jobstats_mingw with a custom command 00:03:25.352 [332/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:25.352 [333/740] Generating lib/rte_latencystats_def with a custom command 00:03:25.352 
[334/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:25.352 [335/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:25.352 [336/740] Linking static target lib/librte_jobstats.a 00:03:25.352 [337/740] Generating lib/rte_latencystats_mingw with a custom command 00:03:25.611 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:25.611 [339/740] Generating lib/rte_lpm_def with a custom command 00:03:25.611 [340/740] Generating lib/rte_lpm_mingw with a custom command 00:03:25.611 [341/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:25.611 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:25.611 [343/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:25.611 [344/740] Linking static target lib/librte_ip_frag.a 00:03:25.871 [345/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.871 [346/740] Linking target lib/librte_jobstats.so.23.0 00:03:25.871 [347/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.871 [348/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:25.871 [349/740] Linking static target lib/librte_latencystats.a 00:03:25.871 [350/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.131 [351/740] Linking target lib/librte_ip_frag.so.23.0 00:03:26.131 [352/740] Linking target lib/librte_cryptodev.so.23.0 00:03:26.132 [353/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:26.132 [354/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:26.132 [355/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:26.132 [356/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:26.132 [357/740] Generating lib/rte_member_def with a custom command 00:03:26.132 [358/740] Generating lib/rte_member_mingw with a custom command 00:03:26.132 [359/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:26.132 [360/740] Generating lib/rte_pcapng_def with a custom command 00:03:26.132 [361/740] Generating lib/rte_pcapng_mingw with a custom command 00:03:26.132 [362/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:26.132 [363/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.132 [364/740] Linking target lib/librte_latencystats.so.23.0 00:03:26.418 [365/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:26.418 [366/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:26.418 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:26.418 [368/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:26.418 [369/740] Linking static target lib/librte_lpm.a 00:03:26.418 [370/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:26.677 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:26.677 [372/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:26.677 [373/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:26.677 [374/740] Generating lib/rte_power_def with a custom command 00:03:26.677 [375/740] 
Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:26.677 [376/740] Generating lib/rte_power_mingw with a custom command 00:03:26.677 [377/740] Generating lib/rte_rawdev_def with a custom command 00:03:26.677 [378/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.937 [379/740] Generating lib/rte_rawdev_mingw with a custom command 00:03:26.937 [380/740] Linking target lib/librte_lpm.so.23.0 00:03:26.937 [381/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:26.937 [382/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:26.937 [383/740] Linking static target lib/librte_pcapng.a 00:03:26.937 [384/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.937 [385/740] Generating lib/rte_regexdev_def with a custom command 00:03:26.937 [386/740] Generating lib/rte_regexdev_mingw with a custom command 00:03:26.937 [387/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:26.937 [388/740] Linking target lib/librte_eventdev.so.23.0 00:03:26.937 [389/740] Generating lib/rte_dmadev_def with a custom command 00:03:26.937 [390/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:26.937 [391/740] Generating lib/rte_dmadev_mingw with a custom command 00:03:26.937 [392/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:27.196 [393/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:27.196 [394/740] Generating lib/rte_rib_def with a custom command 00:03:27.196 [395/740] Generating lib/rte_rib_mingw with a custom command 00:03:27.196 [396/740] Generating lib/rte_reorder_def with a custom command 00:03:27.196 [397/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:27.196 [398/740] Linking static target lib/librte_rawdev.a 00:03:27.196 [399/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.196 [400/740] Generating lib/rte_reorder_mingw with a custom command 00:03:27.196 [401/740] Linking target lib/librte_pcapng.so.23.0 00:03:27.196 [402/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:27.196 [403/740] Linking static target lib/librte_power.a 00:03:27.196 [404/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:27.196 [405/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:27.196 [406/740] Linking static target lib/librte_dmadev.a 00:03:27.460 [407/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:27.460 [408/740] Linking static target lib/librte_regexdev.a 00:03:27.460 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:27.460 [410/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.460 [411/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:27.460 [412/740] Linking target lib/librte_rawdev.so.23.0 00:03:27.728 [413/740] Generating lib/rte_sched_def with a custom command 00:03:27.728 [414/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:27.728 [415/740] Generating lib/rte_sched_mingw with a custom command 00:03:27.728 [416/740] Generating lib/rte_security_def with a custom command 00:03:27.728 [417/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:27.728 [418/740] 
Linking static target lib/librte_reorder.a 00:03:27.728 [419/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:27.728 [420/740] Generating lib/rte_security_mingw with a custom command 00:03:27.728 [421/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:27.728 [422/740] Linking static target lib/librte_member.a 00:03:27.728 [423/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:27.728 [424/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.728 [425/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:27.728 [426/740] Generating lib/rte_stack_def with a custom command 00:03:27.728 [427/740] Linking target lib/librte_dmadev.so.23.0 00:03:27.988 [428/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:27.988 [429/740] Linking static target lib/librte_stack.a 00:03:27.988 [430/740] Generating lib/rte_stack_mingw with a custom command 00:03:27.988 [431/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:27.988 [432/740] Linking static target lib/librte_rib.a 00:03:27.988 [433/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.988 [434/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:27.988 [435/740] Linking target lib/librte_reorder.so.23.0 00:03:27.988 [436/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:27.988 [437/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.988 [438/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.988 [439/740] Linking target lib/librte_regexdev.so.23.0 00:03:28.248 [440/740] Linking target lib/librte_stack.so.23.0 00:03:28.248 [441/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.248 [442/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.248 [443/740] Linking target lib/librte_member.so.23.0 00:03:28.248 [444/740] Linking target lib/librte_power.so.23.0 00:03:28.248 [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:28.248 [446/740] Linking static target lib/librte_security.a 00:03:28.248 [447/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.506 [448/740] Linking target lib/librte_rib.so.23.0 00:03:28.506 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:28.506 [450/740] Generating lib/rte_vhost_def with a custom command 00:03:28.506 [451/740] Generating lib/rte_vhost_mingw with a custom command 00:03:28.766 [452/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:28.766 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:28.766 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.766 [455/740] Linking target lib/librte_security.so.23.0 00:03:28.766 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:28.766 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:28.766 [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:28.766 [459/740] Linking static target lib/librte_sched.a 00:03:29.066 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:29.325 
[461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:29.325 [462/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.325 [463/740] Generating lib/rte_ipsec_def with a custom command 00:03:29.325 [464/740] Generating lib/rte_ipsec_mingw with a custom command 00:03:29.325 [465/740] Linking target lib/librte_sched.so.23.0 00:03:29.325 [466/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:29.583 [467/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:29.583 [468/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:29.583 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:29.583 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:29.583 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:29.583 [472/740] Generating lib/rte_fib_def with a custom command 00:03:29.843 [473/740] Generating lib/rte_fib_mingw with a custom command 00:03:29.843 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:30.102 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:30.102 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:30.102 [477/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:30.362 [478/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:30.362 [479/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:30.362 [480/740] Linking static target lib/librte_ipsec.a 00:03:30.620 [481/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:30.620 [482/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:30.620 [483/740] Linking static target lib/librte_fib.a 00:03:30.620 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:30.620 [485/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.620 [486/740] Linking target lib/librte_ipsec.so.23.0 00:03:30.880 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:30.880 [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:30.880 [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:30.880 [490/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.880 [491/740] Linking target lib/librte_fib.so.23.0 00:03:31.451 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:31.451 [493/740] Generating lib/rte_port_def with a custom command 00:03:31.451 [494/740] Generating lib/rte_port_mingw with a custom command 00:03:31.451 [495/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:31.451 [496/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:31.451 [497/740] Generating lib/rte_pdump_def with a custom command 00:03:31.451 [498/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:31.451 [499/740] Generating lib/rte_pdump_mingw with a custom command 00:03:31.451 [500/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:31.710 [501/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:31.710 [502/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:31.710 [503/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:31.710 [504/740] 
Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:31.710 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:31.970 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:32.231 [507/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:32.231 [508/740] Linking static target lib/librte_port.a 00:03:32.231 [509/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:32.231 [510/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:32.231 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:32.231 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:32.491 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:32.491 [514/740] Linking static target lib/librte_pdump.a 00:03:32.491 [515/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.751 [516/740] Linking target lib/librte_port.so.23.0 00:03:32.751 [517/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:32.751 [518/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:32.751 [519/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:32.751 [520/740] Generating lib/rte_table_def with a custom command 00:03:32.751 [521/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.751 [522/740] Generating lib/rte_table_mingw with a custom command 00:03:32.751 [523/740] Linking target lib/librte_pdump.so.23.0 00:03:33.009 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:33.009 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:33.269 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:33.269 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:33.269 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:33.269 [529/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:33.269 [530/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:33.269 [531/740] Generating lib/rte_pipeline_def with a custom command 00:03:33.269 [532/740] Linking static target lib/librte_table.a 00:03:33.269 [533/740] Generating lib/rte_pipeline_mingw with a custom command 00:03:33.838 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:33.838 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:33.838 [536/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:34.097 [537/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.097 [538/740] Linking target lib/librte_table.so.23.0 00:03:34.097 [539/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:34.097 [540/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:34.356 [541/740] Generating lib/rte_graph_def with a custom command 00:03:34.356 [542/740] Generating lib/rte_graph_mingw with a custom command 00:03:34.356 [543/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:34.356 [544/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:34.356 [545/740] Compiling C object 
lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:34.615 [546/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:34.615 [547/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:34.615 [548/740] Linking static target lib/librte_graph.a 00:03:34.615 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:34.875 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:34.875 [551/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:34.875 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:35.134 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:35.134 [554/740] Generating lib/rte_node_def with a custom command 00:03:35.134 [555/740] Generating lib/rte_node_mingw with a custom command 00:03:35.393 [556/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.393 [557/740] Linking target lib/librte_graph.so.23.0 00:03:35.393 [558/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:35.393 [559/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:35.393 [560/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:35.394 [561/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:35.394 [562/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:35.653 [563/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:35.653 [564/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:35.653 [565/740] Generating drivers/rte_bus_pci_def with a custom command 00:03:35.653 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:35.653 [567/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:35.653 [568/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:35.653 [569/740] Generating drivers/rte_bus_vdev_def with a custom command 00:03:35.653 [570/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:35.653 [571/740] Linking static target lib/librte_node.a 00:03:35.653 [572/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:35.653 [573/740] Generating drivers/rte_mempool_ring_def with a custom command 00:03:35.653 [574/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:35.653 [575/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:35.912 [576/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:35.912 [577/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:35.912 [578/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:35.913 [579/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.913 [580/740] Linking target lib/librte_node.so.23.0 00:03:35.913 [581/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:35.913 [582/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:36.172 [583/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:36.172 [584/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:36.172 [585/740] Linking static target drivers/librte_bus_vdev.a 00:03:36.172 [586/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:36.172 
[587/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:36.172 [588/740] Linking static target drivers/librte_bus_pci.a 00:03:36.172 [589/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.431 [590/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:36.431 [591/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:36.431 [592/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:36.431 [593/740] Linking target drivers/librte_bus_vdev.so.23.0 00:03:36.431 [594/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.431 [595/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:36.431 [596/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:36.431 [597/740] Linking target drivers/librte_bus_pci.so.23.0 00:03:36.690 [598/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:36.690 [599/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:36.690 [600/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:36.690 [601/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:36.690 [602/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:36.962 [603/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:36.962 [604/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:36.962 [605/740] Linking static target drivers/librte_mempool_ring.a 00:03:36.962 [606/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:36.962 [607/740] Linking target drivers/librte_mempool_ring.so.23.0 00:03:37.222 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:37.481 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:37.741 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:37.741 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:38.001 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:38.570 [613/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:38.570 [614/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:38.570 [615/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:38.571 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:38.831 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:38.831 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:38.831 [619/740] Generating drivers/rte_net_i40e_def with a custom command 00:03:39.090 [620/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:39.090 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:39.348 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:39.916 [623/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:40.175 
[624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:40.175 [625/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:40.175 [626/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:40.175 [627/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:40.175 [628/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:40.175 [629/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:40.175 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:40.435 [631/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:40.435 [632/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:40.694 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:40.953 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:41.211 [635/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:41.211 [636/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:41.211 [637/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:41.469 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:41.469 [639/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:41.469 [640/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:41.469 [641/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:41.469 [642/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:41.469 [643/740] Linking static target drivers/librte_net_i40e.a 00:03:41.728 [644/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:41.728 [645/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:41.728 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:41.728 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:41.986 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:42.244 [649/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.244 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:42.244 [651/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:42.244 [652/740] Linking target drivers/librte_net_i40e.so.23.0 00:03:42.503 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:42.503 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:42.503 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:42.763 [656/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:42.763 [657/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:42.763 [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:42.763 [659/740] Compiling C 
object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:43.022 [660/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:43.022 [661/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:43.022 [662/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:43.281 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:43.539 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:43.539 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:43.539 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:43.798 [667/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:43.798 [668/740] Linking static target lib/librte_vhost.a 00:03:44.057 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:44.057 [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:44.317 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:44.576 [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:44.576 [673/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:44.576 [674/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:44.576 [675/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:44.834 [676/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:44.834 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:44.834 [678/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.834 [679/740] Linking target lib/librte_vhost.so.23.0 00:03:45.092 [680/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:45.092 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:45.092 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:45.092 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:45.351 [684/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:45.351 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:45.608 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:45.608 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:45.608 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:45.608 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:45.608 [690/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:46.174 [691/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:46.174 [692/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:46.174 [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:46.174 [694/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:46.433 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:46.433 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 
00:03:46.690 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:46.948 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:46.948 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:46.948 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:46.948 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:47.513 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:47.513 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:47.513 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:47.513 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:47.770 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:48.028 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:48.028 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:48.285 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:48.543 [710/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:48.543 [711/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:48.802 [712/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:48.802 [713/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:48.802 [714/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:48.802 [715/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:48.802 [716/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:49.372 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:49.372 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:49.632 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:49.632 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:49.632 [721/740] Linking static target lib/librte_pipeline.a 00:03:50.200 [722/740] Linking target app/dpdk-test-cmdline 00:03:50.200 [723/740] Linking target app/dpdk-test-crypto-perf 00:03:50.200 [724/740] Linking target app/dpdk-test-acl 00:03:50.200 [725/740] Linking target app/dpdk-proc-info 00:03:50.200 [726/740] Linking target app/dpdk-test-compress-perf 00:03:50.200 [727/740] Linking target app/dpdk-test-bbdev 00:03:50.200 [728/740] Linking target app/dpdk-dumpcap 00:03:50.200 [729/740] Linking target app/dpdk-pdump 00:03:50.200 [730/740] Linking target app/dpdk-test-eventdev 00:03:50.458 [731/740] Linking target app/dpdk-test-fib 00:03:50.458 [732/740] Linking target app/dpdk-test-flow-perf 00:03:50.458 [733/740] Linking target app/dpdk-test-pipeline 00:03:50.458 [734/740] Linking target app/dpdk-test-gpudev 00:03:50.458 [735/740] Linking target app/dpdk-test-regex 00:03:50.717 [736/740] Linking target app/dpdk-test-sad 00:03:50.717 [737/740] Linking target app/dpdk-testpmd 00:03:50.717 [738/740] Linking target app/dpdk-test-security-perf 00:03:54.911 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.911 [740/740] Linking target lib/librte_pipeline.so.23.0 00:03:54.911 08:25:16 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:55.169 08:25:16 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:55.169 08:25:16 
build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:55.169 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:55.169 [0/1] Installing files. 00:03:55.431 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 
00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.431 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.431 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.432 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:55.433 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 
00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:55.434 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:55.434 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:55.435 
Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:55.435 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:55.435 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.435 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_acl.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 
Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:55.695 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:55.695 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing 
drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:55.695 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:55.695 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:55.695 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.695 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.695 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.695 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.695 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.695 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.695 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.695 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.695 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.956 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.956 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.956 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.956 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.956 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.956 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.956 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.956 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.956 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.957 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.958 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:55.959 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:55.959 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:55.959 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:55.959 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:55.959 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:55.959 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:55.959 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:55.959 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:55.959 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:55.959 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:55.959 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:55.959 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:55.959 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:55.959 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:55.959 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:55.959 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:55.959 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:55.959 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:55.959 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:55.959 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:55.959 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:55.959 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:55.959 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:55.959 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:55.959 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:55.959 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:55.959 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:55.959 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:55.959 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:55.959 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:55.959 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:55.959 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:55.959 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:55.959 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:55.959 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:55.959 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:55.959 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:55.959 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:55.959 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:55.959 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:55.959 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:55.959 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:55.959 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:55.959 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:55.959 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:55.959 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:55.959 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:55.959 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:55.959 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:55.959 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:55.959 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:55.959 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:55.959 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:55.959 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:55.959 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:55.959 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:55.959 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:55.959 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:55.959 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:55.960 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:55.960 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:55.960 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:55.960 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:55.960 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:55.960 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:55.960 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:55.960 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:55.960 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:55.960 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:55.960 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:55.960 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:55.960 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:55.960 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:55.960 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:55.960 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:55.960 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:55.960 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:55.960 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:55.960 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:55.960 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:03:55.960 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:55.960 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:55.960 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:55.960 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:55.960 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:55.960 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:55.960 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:55.960 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:55.960 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:55.960 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:55.960 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:55.960 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:55.960 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:55.960 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:55.960 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:55.960 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:55.960 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:55.960 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:55.960 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:55.960 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:55.960 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:55.960 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:55.960 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:55.960 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:55.960 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:55.960 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:55.960 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:55.960 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:55.960 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:55.960 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:55.960 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:55.960 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:55.960 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:55.960 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:55.960 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:55.960 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:55.960 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:55.960 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:55.960 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:55.960 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:55.960 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:55.960 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:55.960 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:55.960 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:55.960 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:55.960 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:55.960 08:25:17 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:55.960 08:25:17 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:55.960 00:03:55.960 real 0m50.573s 00:03:55.960 user 5m9.601s 00:03:55.960 sys 0m58.422s 00:03:55.960 08:25:17 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:55.960 08:25:17 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:55.960 ************************************ 00:03:55.960 END TEST build_native_dpdk 00:03:55.960 ************************************ 00:03:56.219 08:25:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:56.219 08:25:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:56.219 08:25:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:56.219 08:25:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:56.219 08:25:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:56.219 08:25:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:56.219 08:25:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:56.219 08:25:17 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:56.219 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:56.478 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:56.478 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:56.478 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:56.737 Using 'verbs' RDMA provider 00:04:12.999 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:27.909 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:28.171 Creating mk/config.mk...done. 00:04:28.171 Creating mk/cc.flags.mk...done. 00:04:28.171 Type 'make' to build. 00:04:28.171 08:25:49 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:28.171 08:25:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:28.171 08:25:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:28.171 08:25:49 -- common/autotest_common.sh@10 -- $ set +x 00:04:28.171 ************************************ 00:04:28.171 START TEST make 00:04:28.171 ************************************ 00:04:28.171 08:25:49 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:28.430 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:28.430 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:28.430 meson setup builddir \ 00:04:28.430 -Dwith-libaio=enabled \ 00:04:28.430 -Dwith-liburing=enabled \ 00:04:28.430 -Dwith-libvfn=disabled \ 00:04:28.430 -Dwith-spdk=disabled \ 00:04:28.430 -Dexamples=false \ 00:04:28.430 -Dtests=false \ 00:04:28.430 -Dtools=false && \ 00:04:28.430 meson compile -C builddir && \ 00:04:28.430 cd -) 00:04:28.430 make[1]: Nothing to be done for 'all'. 
00:04:30.967 The Meson build system 00:04:30.967 Version: 1.5.0 00:04:30.967 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:30.967 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:30.967 Build type: native build 00:04:30.967 Project name: xnvme 00:04:30.967 Project version: 0.7.5 00:04:30.967 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:30.967 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:30.967 Host machine cpu family: x86_64 00:04:30.967 Host machine cpu: x86_64 00:04:30.967 Message: host_machine.system: linux 00:04:30.967 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:30.967 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:30.967 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:30.967 Run-time dependency threads found: YES 00:04:30.967 Has header "setupapi.h" : NO 00:04:30.967 Has header "linux/blkzoned.h" : YES 00:04:30.967 Has header "linux/blkzoned.h" : YES (cached) 00:04:30.967 Has header "libaio.h" : YES 00:04:30.967 Library aio found: YES 00:04:30.967 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:30.967 Run-time dependency liburing found: YES 2.2 00:04:30.967 Dependency libvfn skipped: feature with-libvfn disabled 00:04:30.967 Found CMake: /usr/bin/cmake (3.27.7) 00:04:30.967 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:04:30.967 Subproject spdk : skipped: feature with-spdk disabled 00:04:30.967 Run-time dependency appleframeworks found: NO (tried framework) 00:04:30.967 Run-time dependency appleframeworks found: NO (tried framework) 00:04:30.967 Library rt found: YES 00:04:30.967 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:30.967 Configuring xnvme_config.h using configuration 00:04:30.967 Configuring xnvme.spec using configuration 00:04:30.967 Run-time dependency bash-completion found: YES 2.11 00:04:30.967 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:30.967 Program cp found: YES (/usr/bin/cp) 00:04:30.967 Build targets in project: 3 00:04:30.967 00:04:30.967 xnvme 0.7.5 00:04:30.967 00:04:30.967 Subprojects 00:04:30.967 spdk : NO Feature 'with-spdk' disabled 00:04:30.967 00:04:30.967 User defined options 00:04:30.967 examples : false 00:04:30.967 tests : false 00:04:30.967 tools : false 00:04:30.967 with-libaio : enabled 00:04:30.967 with-liburing: enabled 00:04:30.967 with-libvfn : disabled 00:04:30.967 with-spdk : disabled 00:04:30.967 00:04:30.967 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:31.226 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:31.226 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:04:31.227 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:04:31.227 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:04:31.227 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:04:31.227 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:04:31.227 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:04:31.227 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:04:31.227 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:04:31.227 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:04:31.227 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 
00:04:31.227 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:04:31.227 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:04:31.227 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:04:31.485 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:04:31.485 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:04:31.485 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:04:31.485 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:04:31.485 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:04:31.485 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:04:31.485 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:04:31.485 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:04:31.486 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:04:31.486 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:04:31.486 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:04:31.486 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:04:31.486 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:04:31.486 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:04:31.486 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:04:31.486 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:04:31.486 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:04:31.486 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:04:31.486 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:04:31.486 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:04:31.486 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:04:31.486 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:04:31.486 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:04:31.486 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:04:31.486 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:04:31.486 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:04:31.486 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:04:31.486 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:04:31.486 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:04:31.745 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:04:31.745 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:04:31.745 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:04:31.745 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:04:31.745 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:04:31.745 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:04:31.745 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:04:31.745 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:04:31.745 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:04:31.745 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:04:31.745 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:04:31.745 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:04:31.745 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:04:31.745 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:04:31.745 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:04:31.745 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:04:31.745 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:04:31.745 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:04:31.745 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:04:31.745 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:04:31.745 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:04:31.745 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:04:31.745 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:04:31.745 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:04:32.004 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:04:32.004 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:04:32.004 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:04:32.004 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:04:32.004 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:04:32.004 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:04:32.004 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:04:32.263 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:04:32.522 [75/76] Linking static target lib/libxnvme.a 00:04:32.522 [76/76] Linking target lib/libxnvme.so.0.7.5 00:04:32.522 INFO: autodetecting backend as ninja 00:04:32.522 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:32.522 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:05:28.775 CC lib/log/log.o 00:05:28.775 CC lib/log/log_flags.o 00:05:28.775 CC lib/log/log_deprecated.o 00:05:28.775 CC lib/ut/ut.o 00:05:28.775 CC lib/ut_mock/mock.o 00:05:28.775 LIB libspdk_ut.a 00:05:28.775 LIB libspdk_log.a 00:05:28.775 SO libspdk_ut.so.2.0 00:05:28.775 LIB libspdk_ut_mock.a 00:05:28.775 SO libspdk_log.so.7.1 00:05:28.775 SO libspdk_ut_mock.so.6.0 00:05:28.775 SYMLINK libspdk_ut.so 00:05:28.775 SYMLINK libspdk_log.so 00:05:28.775 SYMLINK libspdk_ut_mock.so 00:05:28.775 CC lib/dma/dma.o 00:05:28.775 CC lib/ioat/ioat.o 00:05:28.775 CXX lib/trace_parser/trace.o 00:05:28.775 CC lib/util/base64.o 00:05:28.775 CC lib/util/bit_array.o 00:05:28.775 CC lib/util/cpuset.o 00:05:28.775 CC lib/util/crc16.o 00:05:28.775 CC lib/util/crc32.o 00:05:28.775 CC lib/util/crc32c.o 00:05:28.775 CC lib/vfio_user/host/vfio_user_pci.o 00:05:28.775 CC lib/util/crc32_ieee.o 00:05:28.775 CC lib/util/crc64.o 00:05:28.775 CC lib/util/dif.o 00:05:28.775 LIB libspdk_dma.a 00:05:28.775 SO libspdk_dma.so.5.0 00:05:28.775 CC lib/util/fd.o 00:05:28.775 CC lib/util/fd_group.o 00:05:28.775 CC lib/util/file.o 00:05:28.775 SYMLINK libspdk_dma.so 00:05:28.775 CC lib/util/hexlify.o 00:05:28.775 CC lib/util/iov.o 00:05:28.775 CC lib/util/math.o 00:05:28.775 LIB libspdk_ioat.a 00:05:28.775 SO 
libspdk_ioat.so.7.0 00:05:28.775 CC lib/vfio_user/host/vfio_user.o 00:05:28.775 CC lib/util/net.o 00:05:28.775 SYMLINK libspdk_ioat.so 00:05:28.775 CC lib/util/pipe.o 00:05:28.775 CC lib/util/strerror_tls.o 00:05:28.775 CC lib/util/string.o 00:05:28.775 CC lib/util/uuid.o 00:05:28.775 CC lib/util/xor.o 00:05:28.775 CC lib/util/zipf.o 00:05:28.775 CC lib/util/md5.o 00:05:28.775 LIB libspdk_vfio_user.a 00:05:28.775 SO libspdk_vfio_user.so.5.0 00:05:28.775 SYMLINK libspdk_vfio_user.so 00:05:28.775 LIB libspdk_util.a 00:05:28.775 SO libspdk_util.so.10.1 00:05:28.775 LIB libspdk_trace_parser.a 00:05:28.775 SO libspdk_trace_parser.so.6.0 00:05:28.775 SYMLINK libspdk_util.so 00:05:28.775 SYMLINK libspdk_trace_parser.so 00:05:28.775 CC lib/conf/conf.o 00:05:28.775 CC lib/rdma_utils/rdma_utils.o 00:05:28.775 CC lib/json/json_parse.o 00:05:28.775 CC lib/json/json_write.o 00:05:28.775 CC lib/idxd/idxd.o 00:05:28.775 CC lib/json/json_util.o 00:05:28.775 CC lib/idxd/idxd_user.o 00:05:28.775 CC lib/idxd/idxd_kernel.o 00:05:28.775 CC lib/vmd/vmd.o 00:05:28.775 CC lib/env_dpdk/env.o 00:05:28.775 CC lib/vmd/led.o 00:05:28.775 LIB libspdk_conf.a 00:05:28.775 SO libspdk_conf.so.6.0 00:05:28.775 CC lib/env_dpdk/memory.o 00:05:28.775 CC lib/env_dpdk/pci.o 00:05:28.775 CC lib/env_dpdk/init.o 00:05:28.775 LIB libspdk_rdma_utils.a 00:05:28.775 SYMLINK libspdk_conf.so 00:05:28.775 CC lib/env_dpdk/threads.o 00:05:28.775 LIB libspdk_json.a 00:05:28.775 SO libspdk_rdma_utils.so.1.0 00:05:28.775 SO libspdk_json.so.6.0 00:05:28.775 CC lib/env_dpdk/pci_ioat.o 00:05:28.775 SYMLINK libspdk_rdma_utils.so 00:05:28.775 CC lib/env_dpdk/pci_virtio.o 00:05:28.775 SYMLINK libspdk_json.so 00:05:28.775 CC lib/env_dpdk/pci_vmd.o 00:05:28.775 CC lib/env_dpdk/pci_idxd.o 00:05:28.775 CC lib/env_dpdk/pci_event.o 00:05:28.775 CC lib/env_dpdk/sigbus_handler.o 00:05:28.775 CC lib/env_dpdk/pci_dpdk.o 00:05:28.775 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:28.775 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:28.775 LIB libspdk_idxd.a 00:05:28.775 SO libspdk_idxd.so.12.1 00:05:28.775 CC lib/rdma_provider/common.o 00:05:28.775 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:28.775 CC lib/jsonrpc/jsonrpc_server.o 00:05:28.775 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:28.775 CC lib/jsonrpc/jsonrpc_client.o 00:05:28.775 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:28.775 SYMLINK libspdk_idxd.so 00:05:28.775 LIB libspdk_vmd.a 00:05:28.775 SO libspdk_vmd.so.6.0 00:05:28.775 LIB libspdk_rdma_provider.a 00:05:28.775 SO libspdk_rdma_provider.so.7.0 00:05:28.775 SYMLINK libspdk_vmd.so 00:05:28.775 LIB libspdk_jsonrpc.a 00:05:28.775 SYMLINK libspdk_rdma_provider.so 00:05:28.775 SO libspdk_jsonrpc.so.6.0 00:05:28.775 SYMLINK libspdk_jsonrpc.so 00:05:28.775 CC lib/rpc/rpc.o 00:05:28.775 LIB libspdk_env_dpdk.a 00:05:28.775 LIB libspdk_rpc.a 00:05:28.775 SO libspdk_rpc.so.6.0 00:05:28.775 SO libspdk_env_dpdk.so.15.1 00:05:28.775 SYMLINK libspdk_rpc.so 00:05:28.775 SYMLINK libspdk_env_dpdk.so 00:05:28.775 CC lib/keyring/keyring.o 00:05:28.775 CC lib/keyring/keyring_rpc.o 00:05:28.775 CC lib/trace/trace_flags.o 00:05:28.775 CC lib/trace/trace.o 00:05:28.775 CC lib/trace/trace_rpc.o 00:05:28.775 CC lib/notify/notify.o 00:05:28.775 CC lib/notify/notify_rpc.o 00:05:28.775 LIB libspdk_notify.a 00:05:28.775 SO libspdk_notify.so.6.0 00:05:28.775 LIB libspdk_keyring.a 00:05:28.775 LIB libspdk_trace.a 00:05:28.775 SO libspdk_keyring.so.2.0 00:05:28.775 SYMLINK libspdk_notify.so 00:05:28.775 SO libspdk_trace.so.11.0 00:05:28.775 SYMLINK libspdk_keyring.so 00:05:28.775 SYMLINK 
libspdk_trace.so 00:05:28.775 CC lib/sock/sock.o 00:05:28.775 CC lib/sock/sock_rpc.o 00:05:28.775 CC lib/thread/iobuf.o 00:05:28.775 CC lib/thread/thread.o 00:05:28.775 LIB libspdk_sock.a 00:05:28.775 SO libspdk_sock.so.10.0 00:05:28.775 SYMLINK libspdk_sock.so 00:05:28.775 CC lib/nvme/nvme_fabric.o 00:05:28.775 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:28.775 CC lib/nvme/nvme_ctrlr.o 00:05:28.775 CC lib/nvme/nvme_pcie_common.o 00:05:28.775 CC lib/nvme/nvme_ns_cmd.o 00:05:28.775 CC lib/nvme/nvme_pcie.o 00:05:28.775 CC lib/nvme/nvme_ns.o 00:05:28.775 CC lib/nvme/nvme_qpair.o 00:05:28.775 CC lib/nvme/nvme.o 00:05:28.775 CC lib/nvme/nvme_quirks.o 00:05:28.775 CC lib/nvme/nvme_transport.o 00:05:28.775 CC lib/nvme/nvme_discovery.o 00:05:28.775 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:28.775 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:28.775 CC lib/nvme/nvme_tcp.o 00:05:28.775 LIB libspdk_thread.a 00:05:28.775 SO libspdk_thread.so.11.0 00:05:28.775 CC lib/nvme/nvme_opal.o 00:05:28.775 SYMLINK libspdk_thread.so 00:05:28.775 CC lib/nvme/nvme_io_msg.o 00:05:28.775 CC lib/nvme/nvme_poll_group.o 00:05:28.775 CC lib/nvme/nvme_zns.o 00:05:28.775 CC lib/nvme/nvme_stubs.o 00:05:28.775 CC lib/nvme/nvme_auth.o 00:05:28.775 CC lib/nvme/nvme_cuse.o 00:05:28.775 CC lib/nvme/nvme_rdma.o 00:05:28.775 CC lib/accel/accel.o 00:05:28.775 CC lib/blob/blobstore.o 00:05:28.775 CC lib/init/json_config.o 00:05:28.775 CC lib/virtio/virtio.o 00:05:28.775 CC lib/fsdev/fsdev.o 00:05:28.775 CC lib/init/subsystem.o 00:05:29.034 CC lib/virtio/virtio_vhost_user.o 00:05:29.034 CC lib/blob/request.o 00:05:29.034 CC lib/init/subsystem_rpc.o 00:05:29.034 CC lib/accel/accel_rpc.o 00:05:29.293 CC lib/init/rpc.o 00:05:29.293 CC lib/blob/zeroes.o 00:05:29.293 CC lib/blob/blob_bs_dev.o 00:05:29.293 CC lib/virtio/virtio_vfio_user.o 00:05:29.293 LIB libspdk_init.a 00:05:29.551 CC lib/virtio/virtio_pci.o 00:05:29.551 SO libspdk_init.so.6.0 00:05:29.551 CC lib/accel/accel_sw.o 00:05:29.551 CC lib/fsdev/fsdev_io.o 00:05:29.551 CC lib/fsdev/fsdev_rpc.o 00:05:29.551 SYMLINK libspdk_init.so 00:05:29.828 CC lib/event/app.o 00:05:29.828 CC lib/event/reactor.o 00:05:29.828 CC lib/event/app_rpc.o 00:05:29.828 CC lib/event/log_rpc.o 00:05:29.828 LIB libspdk_nvme.a 00:05:29.828 LIB libspdk_virtio.a 00:05:29.828 SO libspdk_virtio.so.7.0 00:05:29.828 CC lib/event/scheduler_static.o 00:05:29.828 LIB libspdk_accel.a 00:05:30.096 SYMLINK libspdk_virtio.so 00:05:30.096 SO libspdk_accel.so.16.0 00:05:30.096 SO libspdk_nvme.so.15.0 00:05:30.096 LIB libspdk_fsdev.a 00:05:30.096 SYMLINK libspdk_accel.so 00:05:30.096 SO libspdk_fsdev.so.2.0 00:05:30.096 SYMLINK libspdk_fsdev.so 00:05:30.355 SYMLINK libspdk_nvme.so 00:05:30.356 CC lib/bdev/bdev.o 00:05:30.356 CC lib/bdev/bdev_rpc.o 00:05:30.356 CC lib/bdev/bdev_zone.o 00:05:30.356 CC lib/bdev/part.o 00:05:30.356 CC lib/bdev/scsi_nvme.o 00:05:30.356 LIB libspdk_event.a 00:05:30.356 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:30.356 SO libspdk_event.so.14.0 00:05:30.615 SYMLINK libspdk_event.so 00:05:31.184 LIB libspdk_fuse_dispatcher.a 00:05:31.443 SO libspdk_fuse_dispatcher.so.1.0 00:05:31.443 SYMLINK libspdk_fuse_dispatcher.so 00:05:32.382 LIB libspdk_blob.a 00:05:32.642 SO libspdk_blob.so.11.0 00:05:32.642 SYMLINK libspdk_blob.so 00:05:33.268 CC lib/blobfs/blobfs.o 00:05:33.268 CC lib/blobfs/tree.o 00:05:33.268 CC lib/lvol/lvol.o 00:05:33.528 LIB libspdk_bdev.a 00:05:33.528 SO libspdk_bdev.so.17.0 00:05:33.787 SYMLINK libspdk_bdev.so 00:05:33.787 CC lib/nvmf/ctrlr.o 00:05:33.787 CC lib/nvmf/ctrlr_discovery.o 
00:05:33.787 CC lib/nvmf/subsystem.o 00:05:33.787 CC lib/nvmf/ctrlr_bdev.o 00:05:33.787 CC lib/nbd/nbd.o 00:05:33.787 CC lib/scsi/dev.o 00:05:34.047 CC lib/ftl/ftl_core.o 00:05:34.047 CC lib/ublk/ublk.o 00:05:34.047 LIB libspdk_blobfs.a 00:05:34.307 CC lib/scsi/lun.o 00:05:34.307 SO libspdk_blobfs.so.10.0 00:05:34.307 SYMLINK libspdk_blobfs.so 00:05:34.307 CC lib/ublk/ublk_rpc.o 00:05:34.307 LIB libspdk_lvol.a 00:05:34.307 SO libspdk_lvol.so.10.0 00:05:34.307 SYMLINK libspdk_lvol.so 00:05:34.307 CC lib/nbd/nbd_rpc.o 00:05:34.307 CC lib/ftl/ftl_init.o 00:05:34.567 CC lib/scsi/port.o 00:05:34.567 CC lib/ftl/ftl_layout.o 00:05:34.567 CC lib/ftl/ftl_debug.o 00:05:34.567 LIB libspdk_nbd.a 00:05:34.567 SO libspdk_nbd.so.7.0 00:05:34.567 CC lib/ftl/ftl_io.o 00:05:34.567 LIB libspdk_ublk.a 00:05:34.567 CC lib/ftl/ftl_sb.o 00:05:34.827 SYMLINK libspdk_nbd.so 00:05:34.827 CC lib/scsi/scsi.o 00:05:34.827 CC lib/ftl/ftl_l2p.o 00:05:34.827 SO libspdk_ublk.so.3.0 00:05:34.827 SYMLINK libspdk_ublk.so 00:05:34.827 CC lib/ftl/ftl_l2p_flat.o 00:05:34.827 CC lib/nvmf/nvmf.o 00:05:34.827 CC lib/ftl/ftl_nv_cache.o 00:05:34.827 CC lib/scsi/scsi_bdev.o 00:05:34.827 CC lib/scsi/scsi_pr.o 00:05:34.827 CC lib/scsi/scsi_rpc.o 00:05:34.827 CC lib/scsi/task.o 00:05:35.087 CC lib/ftl/ftl_band.o 00:05:35.087 CC lib/ftl/ftl_band_ops.o 00:05:35.087 CC lib/ftl/ftl_writer.o 00:05:35.087 CC lib/ftl/ftl_rq.o 00:05:35.347 CC lib/ftl/ftl_reloc.o 00:05:35.347 CC lib/ftl/ftl_l2p_cache.o 00:05:35.347 CC lib/nvmf/nvmf_rpc.o 00:05:35.347 CC lib/ftl/ftl_p2l.o 00:05:35.347 CC lib/ftl/ftl_p2l_log.o 00:05:35.607 CC lib/nvmf/transport.o 00:05:35.607 LIB libspdk_scsi.a 00:05:35.607 SO libspdk_scsi.so.9.0 00:05:35.867 SYMLINK libspdk_scsi.so 00:05:35.867 CC lib/ftl/mngt/ftl_mngt.o 00:05:35.867 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:35.867 CC lib/nvmf/tcp.o 00:05:36.126 CC lib/iscsi/conn.o 00:05:36.126 CC lib/vhost/vhost.o 00:05:36.126 CC lib/vhost/vhost_rpc.o 00:05:36.126 CC lib/vhost/vhost_scsi.o 00:05:36.126 CC lib/vhost/vhost_blk.o 00:05:36.126 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:36.385 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:36.385 CC lib/vhost/rte_vhost_user.o 00:05:36.385 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:36.385 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:36.644 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:36.902 CC lib/iscsi/init_grp.o 00:05:36.902 CC lib/nvmf/stubs.o 00:05:36.902 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:36.902 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:36.902 CC lib/nvmf/mdns_server.o 00:05:37.161 CC lib/nvmf/rdma.o 00:05:37.161 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:37.161 CC lib/iscsi/iscsi.o 00:05:37.162 CC lib/nvmf/auth.o 00:05:37.162 CC lib/iscsi/param.o 00:05:37.162 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:37.424 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:37.424 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:37.424 CC lib/ftl/utils/ftl_conf.o 00:05:37.729 CC lib/ftl/utils/ftl_md.o 00:05:37.729 CC lib/iscsi/portal_grp.o 00:05:37.729 LIB libspdk_vhost.a 00:05:37.729 CC lib/ftl/utils/ftl_mempool.o 00:05:37.729 CC lib/ftl/utils/ftl_bitmap.o 00:05:37.729 SO libspdk_vhost.so.8.0 00:05:37.988 CC lib/ftl/utils/ftl_property.o 00:05:37.988 SYMLINK libspdk_vhost.so 00:05:37.988 CC lib/iscsi/tgt_node.o 00:05:37.988 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:37.988 CC lib/iscsi/iscsi_subsystem.o 00:05:37.988 CC lib/iscsi/iscsi_rpc.o 00:05:37.988 CC lib/iscsi/task.o 00:05:38.247 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:38.247 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:38.247 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:38.247 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:05:38.506 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:38.506 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:38.506 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:38.506 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:38.506 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:38.506 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:38.765 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:38.765 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:38.765 CC lib/ftl/base/ftl_base_dev.o 00:05:38.765 CC lib/ftl/base/ftl_base_bdev.o 00:05:38.765 CC lib/ftl/ftl_trace.o 00:05:39.024 LIB libspdk_iscsi.a 00:05:39.024 SO libspdk_iscsi.so.8.0 00:05:39.024 LIB libspdk_ftl.a 00:05:39.282 SYMLINK libspdk_iscsi.so 00:05:39.282 SO libspdk_ftl.so.9.0 00:05:39.564 SYMLINK libspdk_ftl.so 00:05:40.130 LIB libspdk_nvmf.a 00:05:40.387 SO libspdk_nvmf.so.20.0 00:05:40.646 SYMLINK libspdk_nvmf.so 00:05:40.915 CC module/env_dpdk/env_dpdk_rpc.o 00:05:41.184 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:41.184 CC module/scheduler/gscheduler/gscheduler.o 00:05:41.184 CC module/sock/posix/posix.o 00:05:41.184 CC module/keyring/linux/keyring.o 00:05:41.184 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:41.184 CC module/fsdev/aio/fsdev_aio.o 00:05:41.184 CC module/keyring/file/keyring.o 00:05:41.184 CC module/blob/bdev/blob_bdev.o 00:05:41.184 CC module/accel/error/accel_error.o 00:05:41.184 LIB libspdk_env_dpdk_rpc.a 00:05:41.184 SO libspdk_env_dpdk_rpc.so.6.0 00:05:41.184 CC module/keyring/linux/keyring_rpc.o 00:05:41.184 LIB libspdk_scheduler_gscheduler.a 00:05:41.184 SYMLINK libspdk_env_dpdk_rpc.so 00:05:41.184 CC module/keyring/file/keyring_rpc.o 00:05:41.184 CC module/accel/error/accel_error_rpc.o 00:05:41.184 LIB libspdk_scheduler_dpdk_governor.a 00:05:41.184 SO libspdk_scheduler_gscheduler.so.4.0 00:05:41.184 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:41.184 LIB libspdk_scheduler_dynamic.a 00:05:41.184 SO libspdk_scheduler_dynamic.so.4.0 00:05:41.442 SYMLINK libspdk_scheduler_gscheduler.so 00:05:41.442 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:41.442 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:41.442 CC module/fsdev/aio/linux_aio_mgr.o 00:05:41.442 SYMLINK libspdk_scheduler_dynamic.so 00:05:41.442 LIB libspdk_keyring_linux.a 00:05:41.442 LIB libspdk_keyring_file.a 00:05:41.442 LIB libspdk_accel_error.a 00:05:41.442 SO libspdk_keyring_linux.so.1.0 00:05:41.442 SO libspdk_keyring_file.so.2.0 00:05:41.442 SO libspdk_accel_error.so.2.0 00:05:41.442 LIB libspdk_blob_bdev.a 00:05:41.442 SYMLINK libspdk_keyring_linux.so 00:05:41.442 SO libspdk_blob_bdev.so.11.0 00:05:41.442 SYMLINK libspdk_keyring_file.so 00:05:41.442 SYMLINK libspdk_accel_error.so 00:05:41.442 CC module/accel/ioat/accel_ioat.o 00:05:41.442 CC module/accel/ioat/accel_ioat_rpc.o 00:05:41.442 CC module/accel/dsa/accel_dsa.o 00:05:41.442 CC module/accel/dsa/accel_dsa_rpc.o 00:05:41.442 SYMLINK libspdk_blob_bdev.so 00:05:41.700 CC module/accel/iaa/accel_iaa.o 00:05:41.700 LIB libspdk_accel_ioat.a 00:05:41.700 SO libspdk_accel_ioat.so.6.0 00:05:41.700 CC module/bdev/error/vbdev_error.o 00:05:41.700 CC module/blobfs/bdev/blobfs_bdev.o 00:05:41.700 CC module/bdev/lvol/vbdev_lvol.o 00:05:41.958 SYMLINK libspdk_accel_ioat.so 00:05:41.958 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:41.959 CC module/bdev/gpt/gpt.o 00:05:41.959 CC module/bdev/delay/vbdev_delay.o 00:05:41.959 LIB libspdk_fsdev_aio.a 00:05:41.959 CC module/accel/iaa/accel_iaa_rpc.o 00:05:41.959 LIB libspdk_accel_dsa.a 00:05:41.959 SO libspdk_fsdev_aio.so.1.0 00:05:41.959 SO libspdk_accel_dsa.so.5.0 
00:05:41.959 SYMLINK libspdk_fsdev_aio.so 00:05:41.959 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:41.959 LIB libspdk_blobfs_bdev.a 00:05:41.959 LIB libspdk_accel_iaa.a 00:05:41.959 SO libspdk_blobfs_bdev.so.6.0 00:05:41.959 CC module/bdev/gpt/vbdev_gpt.o 00:05:41.959 SYMLINK libspdk_accel_dsa.so 00:05:41.959 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:42.218 LIB libspdk_sock_posix.a 00:05:42.218 SO libspdk_accel_iaa.so.3.0 00:05:42.218 SO libspdk_sock_posix.so.6.0 00:05:42.218 SYMLINK libspdk_blobfs_bdev.so 00:05:42.218 CC module/bdev/error/vbdev_error_rpc.o 00:05:42.218 SYMLINK libspdk_accel_iaa.so 00:05:42.218 SYMLINK libspdk_sock_posix.so 00:05:42.218 CC module/bdev/malloc/bdev_malloc.o 00:05:42.218 LIB libspdk_bdev_delay.a 00:05:42.218 SO libspdk_bdev_delay.so.6.0 00:05:42.218 LIB libspdk_bdev_error.a 00:05:42.477 CC module/bdev/null/bdev_null.o 00:05:42.477 SO libspdk_bdev_error.so.6.0 00:05:42.477 CC module/bdev/nvme/bdev_nvme.o 00:05:42.477 LIB libspdk_bdev_gpt.a 00:05:42.477 CC module/bdev/passthru/vbdev_passthru.o 00:05:42.477 SYMLINK libspdk_bdev_delay.so 00:05:42.477 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:42.477 CC module/bdev/raid/bdev_raid.o 00:05:42.477 SO libspdk_bdev_gpt.so.6.0 00:05:42.477 SYMLINK libspdk_bdev_error.so 00:05:42.477 CC module/bdev/null/bdev_null_rpc.o 00:05:42.477 SYMLINK libspdk_bdev_gpt.so 00:05:42.477 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:42.477 LIB libspdk_bdev_lvol.a 00:05:42.477 SO libspdk_bdev_lvol.so.6.0 00:05:42.738 SYMLINK libspdk_bdev_lvol.so 00:05:42.738 CC module/bdev/split/vbdev_split.o 00:05:42.738 CC module/bdev/split/vbdev_split_rpc.o 00:05:42.738 LIB libspdk_bdev_null.a 00:05:42.738 SO libspdk_bdev_null.so.6.0 00:05:42.738 LIB libspdk_bdev_malloc.a 00:05:42.738 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:42.738 SYMLINK libspdk_bdev_null.so 00:05:42.738 SO libspdk_bdev_malloc.so.6.0 00:05:42.738 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:42.738 SYMLINK libspdk_bdev_malloc.so 00:05:42.738 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:42.738 CC module/bdev/xnvme/bdev_xnvme.o 00:05:42.998 LIB libspdk_bdev_split.a 00:05:42.998 SO libspdk_bdev_split.so.6.0 00:05:42.998 LIB libspdk_bdev_passthru.a 00:05:42.998 SO libspdk_bdev_passthru.so.6.0 00:05:42.998 CC module/bdev/aio/bdev_aio.o 00:05:42.998 CC module/bdev/ftl/bdev_ftl.o 00:05:42.998 SYMLINK libspdk_bdev_split.so 00:05:42.998 SYMLINK libspdk_bdev_passthru.so 00:05:42.998 CC module/bdev/aio/bdev_aio_rpc.o 00:05:43.259 CC module/bdev/iscsi/bdev_iscsi.o 00:05:43.259 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:43.259 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:43.259 LIB libspdk_bdev_zone_block.a 00:05:43.259 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:43.259 SO libspdk_bdev_zone_block.so.6.0 00:05:43.259 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:43.259 SYMLINK libspdk_bdev_zone_block.so 00:05:43.259 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:43.259 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:43.259 LIB libspdk_bdev_aio.a 00:05:43.259 CC module/bdev/raid/bdev_raid_rpc.o 00:05:43.259 LIB libspdk_bdev_xnvme.a 00:05:43.259 SO libspdk_bdev_aio.so.6.0 00:05:43.519 SO libspdk_bdev_xnvme.so.3.0 00:05:43.519 SYMLINK libspdk_bdev_aio.so 00:05:43.519 CC module/bdev/nvme/nvme_rpc.o 00:05:43.519 SYMLINK libspdk_bdev_xnvme.so 00:05:43.519 CC module/bdev/raid/bdev_raid_sb.o 00:05:43.519 CC module/bdev/raid/raid0.o 00:05:43.519 LIB libspdk_bdev_ftl.a 00:05:43.519 CC module/bdev/nvme/bdev_mdns_client.o 00:05:43.519 LIB libspdk_bdev_iscsi.a 00:05:43.519 SO 
libspdk_bdev_ftl.so.6.0 00:05:43.519 SO libspdk_bdev_iscsi.so.6.0 00:05:43.519 CC module/bdev/raid/raid1.o 00:05:43.778 SYMLINK libspdk_bdev_ftl.so 00:05:43.778 SYMLINK libspdk_bdev_iscsi.so 00:05:43.778 CC module/bdev/raid/concat.o 00:05:43.778 CC module/bdev/nvme/vbdev_opal.o 00:05:43.778 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:43.778 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:43.778 LIB libspdk_bdev_virtio.a 00:05:43.778 SO libspdk_bdev_virtio.so.6.0 00:05:44.037 SYMLINK libspdk_bdev_virtio.so 00:05:44.037 LIB libspdk_bdev_raid.a 00:05:44.037 SO libspdk_bdev_raid.so.6.0 00:05:44.298 SYMLINK libspdk_bdev_raid.so 00:05:46.204 LIB libspdk_bdev_nvme.a 00:05:46.204 SO libspdk_bdev_nvme.so.7.1 00:05:46.204 SYMLINK libspdk_bdev_nvme.so 00:05:46.771 CC module/event/subsystems/scheduler/scheduler.o 00:05:46.771 CC module/event/subsystems/sock/sock.o 00:05:46.771 CC module/event/subsystems/iobuf/iobuf.o 00:05:46.771 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:46.771 CC module/event/subsystems/keyring/keyring.o 00:05:46.771 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:46.771 CC module/event/subsystems/vmd/vmd.o 00:05:46.771 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:46.772 CC module/event/subsystems/fsdev/fsdev.o 00:05:46.772 LIB libspdk_event_scheduler.a 00:05:46.772 LIB libspdk_event_sock.a 00:05:46.772 LIB libspdk_event_keyring.a 00:05:46.772 SO libspdk_event_scheduler.so.4.0 00:05:46.772 LIB libspdk_event_vmd.a 00:05:46.772 LIB libspdk_event_vhost_blk.a 00:05:46.772 SO libspdk_event_sock.so.5.0 00:05:46.772 SO libspdk_event_keyring.so.1.0 00:05:46.772 SO libspdk_event_vmd.so.6.0 00:05:46.772 SO libspdk_event_vhost_blk.so.3.0 00:05:46.772 LIB libspdk_event_iobuf.a 00:05:46.772 LIB libspdk_event_fsdev.a 00:05:46.772 SYMLINK libspdk_event_scheduler.so 00:05:47.030 SYMLINK libspdk_event_sock.so 00:05:47.030 SYMLINK libspdk_event_keyring.so 00:05:47.030 SO libspdk_event_iobuf.so.3.0 00:05:47.030 SO libspdk_event_fsdev.so.1.0 00:05:47.030 SYMLINK libspdk_event_vhost_blk.so 00:05:47.030 SYMLINK libspdk_event_vmd.so 00:05:47.030 SYMLINK libspdk_event_fsdev.so 00:05:47.030 SYMLINK libspdk_event_iobuf.so 00:05:47.288 CC module/event/subsystems/accel/accel.o 00:05:47.546 LIB libspdk_event_accel.a 00:05:47.546 SO libspdk_event_accel.so.6.0 00:05:47.546 SYMLINK libspdk_event_accel.so 00:05:48.112 CC module/event/subsystems/bdev/bdev.o 00:05:48.112 LIB libspdk_event_bdev.a 00:05:48.371 SO libspdk_event_bdev.so.6.0 00:05:48.371 SYMLINK libspdk_event_bdev.so 00:05:48.632 CC module/event/subsystems/nbd/nbd.o 00:05:48.632 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:48.632 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:48.632 CC module/event/subsystems/scsi/scsi.o 00:05:48.632 CC module/event/subsystems/ublk/ublk.o 00:05:48.890 LIB libspdk_event_nbd.a 00:05:48.890 LIB libspdk_event_ublk.a 00:05:48.890 SO libspdk_event_nbd.so.6.0 00:05:48.890 SO libspdk_event_ublk.so.3.0 00:05:48.890 LIB libspdk_event_scsi.a 00:05:48.890 SYMLINK libspdk_event_ublk.so 00:05:48.890 SYMLINK libspdk_event_nbd.so 00:05:48.890 SO libspdk_event_scsi.so.6.0 00:05:48.890 LIB libspdk_event_nvmf.a 00:05:48.890 SO libspdk_event_nvmf.so.6.0 00:05:48.890 SYMLINK libspdk_event_scsi.so 00:05:49.149 SYMLINK libspdk_event_nvmf.so 00:05:49.408 CC module/event/subsystems/iscsi/iscsi.o 00:05:49.409 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:49.409 LIB libspdk_event_iscsi.a 00:05:49.409 LIB libspdk_event_vhost_scsi.a 00:05:49.409 SO libspdk_event_iscsi.so.6.0 00:05:49.667 SO 
libspdk_event_vhost_scsi.so.3.0 00:05:49.667 SYMLINK libspdk_event_iscsi.so 00:05:49.667 SYMLINK libspdk_event_vhost_scsi.so 00:05:49.926 SO libspdk.so.6.0 00:05:49.926 SYMLINK libspdk.so 00:05:50.190 CC app/trace_record/trace_record.o 00:05:50.190 CC test/rpc_client/rpc_client_test.o 00:05:50.190 TEST_HEADER include/spdk/accel.h 00:05:50.190 TEST_HEADER include/spdk/accel_module.h 00:05:50.190 CXX app/trace/trace.o 00:05:50.190 TEST_HEADER include/spdk/assert.h 00:05:50.190 TEST_HEADER include/spdk/barrier.h 00:05:50.190 TEST_HEADER include/spdk/base64.h 00:05:50.190 TEST_HEADER include/spdk/bdev.h 00:05:50.190 TEST_HEADER include/spdk/bdev_module.h 00:05:50.190 TEST_HEADER include/spdk/bdev_zone.h 00:05:50.190 TEST_HEADER include/spdk/bit_array.h 00:05:50.190 TEST_HEADER include/spdk/bit_pool.h 00:05:50.190 TEST_HEADER include/spdk/blob_bdev.h 00:05:50.190 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:50.190 CC app/nvmf_tgt/nvmf_main.o 00:05:50.190 TEST_HEADER include/spdk/blobfs.h 00:05:50.190 TEST_HEADER include/spdk/blob.h 00:05:50.190 TEST_HEADER include/spdk/conf.h 00:05:50.190 TEST_HEADER include/spdk/config.h 00:05:50.190 TEST_HEADER include/spdk/cpuset.h 00:05:50.190 TEST_HEADER include/spdk/crc16.h 00:05:50.190 TEST_HEADER include/spdk/crc32.h 00:05:50.190 TEST_HEADER include/spdk/crc64.h 00:05:50.190 TEST_HEADER include/spdk/dif.h 00:05:50.190 TEST_HEADER include/spdk/dma.h 00:05:50.190 TEST_HEADER include/spdk/endian.h 00:05:50.190 TEST_HEADER include/spdk/env_dpdk.h 00:05:50.190 TEST_HEADER include/spdk/env.h 00:05:50.190 TEST_HEADER include/spdk/event.h 00:05:50.190 CC test/thread/poller_perf/poller_perf.o 00:05:50.190 TEST_HEADER include/spdk/fd_group.h 00:05:50.190 TEST_HEADER include/spdk/fd.h 00:05:50.190 TEST_HEADER include/spdk/file.h 00:05:50.190 TEST_HEADER include/spdk/fsdev.h 00:05:50.190 TEST_HEADER include/spdk/fsdev_module.h 00:05:50.190 TEST_HEADER include/spdk/ftl.h 00:05:50.190 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:50.190 TEST_HEADER include/spdk/gpt_spec.h 00:05:50.190 TEST_HEADER include/spdk/hexlify.h 00:05:50.190 TEST_HEADER include/spdk/histogram_data.h 00:05:50.190 TEST_HEADER include/spdk/idxd.h 00:05:50.190 TEST_HEADER include/spdk/idxd_spec.h 00:05:50.190 TEST_HEADER include/spdk/init.h 00:05:50.190 TEST_HEADER include/spdk/ioat.h 00:05:50.190 TEST_HEADER include/spdk/ioat_spec.h 00:05:50.190 CC examples/util/zipf/zipf.o 00:05:50.190 TEST_HEADER include/spdk/iscsi_spec.h 00:05:50.190 TEST_HEADER include/spdk/json.h 00:05:50.190 TEST_HEADER include/spdk/jsonrpc.h 00:05:50.190 TEST_HEADER include/spdk/keyring.h 00:05:50.190 CC test/dma/test_dma/test_dma.o 00:05:50.190 TEST_HEADER include/spdk/keyring_module.h 00:05:50.190 CC test/app/bdev_svc/bdev_svc.o 00:05:50.190 TEST_HEADER include/spdk/likely.h 00:05:50.190 TEST_HEADER include/spdk/log.h 00:05:50.190 TEST_HEADER include/spdk/lvol.h 00:05:50.190 TEST_HEADER include/spdk/md5.h 00:05:50.190 TEST_HEADER include/spdk/memory.h 00:05:50.190 TEST_HEADER include/spdk/mmio.h 00:05:50.190 TEST_HEADER include/spdk/nbd.h 00:05:50.190 TEST_HEADER include/spdk/net.h 00:05:50.190 TEST_HEADER include/spdk/notify.h 00:05:50.190 TEST_HEADER include/spdk/nvme.h 00:05:50.190 TEST_HEADER include/spdk/nvme_intel.h 00:05:50.190 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:50.190 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:50.190 TEST_HEADER include/spdk/nvme_spec.h 00:05:50.190 TEST_HEADER include/spdk/nvme_zns.h 00:05:50.190 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:50.459 TEST_HEADER 
include/spdk/nvmf_fc_spec.h 00:05:50.459 TEST_HEADER include/spdk/nvmf.h 00:05:50.459 TEST_HEADER include/spdk/nvmf_spec.h 00:05:50.459 TEST_HEADER include/spdk/nvmf_transport.h 00:05:50.459 TEST_HEADER include/spdk/opal.h 00:05:50.459 TEST_HEADER include/spdk/opal_spec.h 00:05:50.459 TEST_HEADER include/spdk/pci_ids.h 00:05:50.459 CC test/env/mem_callbacks/mem_callbacks.o 00:05:50.459 TEST_HEADER include/spdk/pipe.h 00:05:50.459 TEST_HEADER include/spdk/queue.h 00:05:50.459 TEST_HEADER include/spdk/reduce.h 00:05:50.459 TEST_HEADER include/spdk/rpc.h 00:05:50.459 TEST_HEADER include/spdk/scheduler.h 00:05:50.459 TEST_HEADER include/spdk/scsi.h 00:05:50.459 TEST_HEADER include/spdk/scsi_spec.h 00:05:50.459 LINK rpc_client_test 00:05:50.459 TEST_HEADER include/spdk/sock.h 00:05:50.459 TEST_HEADER include/spdk/stdinc.h 00:05:50.459 TEST_HEADER include/spdk/string.h 00:05:50.459 TEST_HEADER include/spdk/thread.h 00:05:50.459 TEST_HEADER include/spdk/trace.h 00:05:50.459 TEST_HEADER include/spdk/trace_parser.h 00:05:50.459 TEST_HEADER include/spdk/tree.h 00:05:50.459 TEST_HEADER include/spdk/ublk.h 00:05:50.459 TEST_HEADER include/spdk/util.h 00:05:50.459 TEST_HEADER include/spdk/uuid.h 00:05:50.459 TEST_HEADER include/spdk/version.h 00:05:50.459 LINK poller_perf 00:05:50.459 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:50.459 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:50.459 TEST_HEADER include/spdk/vhost.h 00:05:50.459 TEST_HEADER include/spdk/vmd.h 00:05:50.459 TEST_HEADER include/spdk/xor.h 00:05:50.459 TEST_HEADER include/spdk/zipf.h 00:05:50.459 CXX test/cpp_headers/accel.o 00:05:50.459 LINK nvmf_tgt 00:05:50.459 LINK zipf 00:05:50.459 LINK spdk_trace_record 00:05:50.459 LINK bdev_svc 00:05:50.459 CXX test/cpp_headers/accel_module.o 00:05:50.459 LINK mem_callbacks 00:05:50.719 LINK spdk_trace 00:05:50.719 CC test/app/histogram_perf/histogram_perf.o 00:05:50.719 CXX test/cpp_headers/assert.o 00:05:50.719 CC test/app/jsoncat/jsoncat.o 00:05:50.719 CC test/app/stub/stub.o 00:05:50.719 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:50.719 CC test/env/vtophys/vtophys.o 00:05:50.719 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:50.719 CC examples/ioat/perf/perf.o 00:05:50.719 LINK test_dma 00:05:50.979 LINK histogram_perf 00:05:50.979 LINK jsoncat 00:05:50.979 CXX test/cpp_headers/barrier.o 00:05:50.979 LINK stub 00:05:50.979 LINK vtophys 00:05:50.979 CC app/iscsi_tgt/iscsi_tgt.o 00:05:50.979 LINK env_dpdk_post_init 00:05:50.979 CXX test/cpp_headers/base64.o 00:05:50.979 LINK ioat_perf 00:05:50.979 CXX test/cpp_headers/bdev.o 00:05:50.979 CXX test/cpp_headers/bdev_module.o 00:05:50.979 CXX test/cpp_headers/bdev_zone.o 00:05:50.979 CC test/env/memory/memory_ut.o 00:05:51.239 CXX test/cpp_headers/bit_array.o 00:05:51.239 CC test/env/pci/pci_ut.o 00:05:51.239 LINK iscsi_tgt 00:05:51.239 LINK nvme_fuzz 00:05:51.239 CC examples/ioat/verify/verify.o 00:05:51.239 CXX test/cpp_headers/bit_pool.o 00:05:51.498 CC app/spdk_lspci/spdk_lspci.o 00:05:51.498 CC app/spdk_tgt/spdk_tgt.o 00:05:51.498 CC test/event/event_perf/event_perf.o 00:05:51.498 CC test/event/reactor/reactor.o 00:05:51.498 CXX test/cpp_headers/blob_bdev.o 00:05:51.498 CC test/nvme/aer/aer.o 00:05:51.498 LINK spdk_lspci 00:05:51.498 LINK verify 00:05:51.498 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:51.498 LINK event_perf 00:05:51.498 LINK reactor 00:05:51.498 LINK pci_ut 00:05:51.498 LINK spdk_tgt 00:05:51.757 CXX test/cpp_headers/blobfs_bdev.o 00:05:51.757 LINK aer 00:05:51.757 CC 
test/event/reactor_perf/reactor_perf.o 00:05:52.016 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:52.016 CXX test/cpp_headers/blobfs.o 00:05:52.016 CC examples/idxd/perf/perf.o 00:05:52.016 CC app/spdk_nvme_perf/perf.o 00:05:52.016 CC examples/vmd/lsvmd/lsvmd.o 00:05:52.016 LINK reactor_perf 00:05:52.016 CC test/event/app_repeat/app_repeat.o 00:05:52.016 LINK memory_ut 00:05:52.016 CXX test/cpp_headers/blob.o 00:05:52.016 LINK interrupt_tgt 00:05:52.016 LINK lsvmd 00:05:52.016 CC test/nvme/reset/reset.o 00:05:52.275 LINK app_repeat 00:05:52.275 CXX test/cpp_headers/conf.o 00:05:52.275 CC test/nvme/sgl/sgl.o 00:05:52.275 CXX test/cpp_headers/config.o 00:05:52.275 CXX test/cpp_headers/cpuset.o 00:05:52.275 CC examples/vmd/led/led.o 00:05:52.275 LINK idxd_perf 00:05:52.275 LINK reset 00:05:52.275 CC test/nvme/e2edp/nvme_dp.o 00:05:52.534 CXX test/cpp_headers/crc16.o 00:05:52.534 LINK led 00:05:52.534 CC test/event/scheduler/scheduler.o 00:05:52.534 CC app/spdk_nvme_identify/identify.o 00:05:52.534 CC app/spdk_nvme_discover/discovery_aer.o 00:05:52.534 LINK sgl 00:05:52.534 CXX test/cpp_headers/crc32.o 00:05:52.534 CC app/spdk_top/spdk_top.o 00:05:52.792 LINK nvme_dp 00:05:52.792 CXX test/cpp_headers/crc64.o 00:05:52.792 LINK scheduler 00:05:52.792 LINK spdk_nvme_discover 00:05:52.792 CC examples/thread/thread/thread_ex.o 00:05:52.792 CC examples/sock/hello_world/hello_sock.o 00:05:52.792 CXX test/cpp_headers/dif.o 00:05:52.792 LINK spdk_nvme_perf 00:05:53.052 CC test/nvme/overhead/overhead.o 00:05:53.052 CC app/vhost/vhost.o 00:05:53.052 CC app/spdk_dd/spdk_dd.o 00:05:53.052 CXX test/cpp_headers/dma.o 00:05:53.052 LINK thread 00:05:53.052 LINK hello_sock 00:05:53.311 LINK vhost 00:05:53.311 CXX test/cpp_headers/endian.o 00:05:53.311 LINK overhead 00:05:53.311 CC app/fio/nvme/fio_plugin.o 00:05:53.311 CC app/fio/bdev/fio_plugin.o 00:05:53.311 CXX test/cpp_headers/env_dpdk.o 00:05:53.570 CC examples/nvme/hello_world/hello_world.o 00:05:53.570 LINK spdk_dd 00:05:53.570 LINK spdk_nvme_identify 00:05:53.570 CC test/nvme/err_injection/err_injection.o 00:05:53.570 CC examples/nvme/reconnect/reconnect.o 00:05:53.570 CXX test/cpp_headers/env.o 00:05:53.570 LINK iscsi_fuzz 00:05:53.570 CXX test/cpp_headers/event.o 00:05:53.570 LINK spdk_top 00:05:53.830 LINK hello_world 00:05:53.830 LINK err_injection 00:05:53.830 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:53.830 CXX test/cpp_headers/fd_group.o 00:05:53.830 CXX test/cpp_headers/fd.o 00:05:53.830 LINK reconnect 00:05:53.830 LINK spdk_nvme 00:05:54.089 LINK spdk_bdev 00:05:54.089 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:54.089 CC test/accel/dif/dif.o 00:05:54.089 CC test/nvme/startup/startup.o 00:05:54.089 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:54.089 CXX test/cpp_headers/file.o 00:05:54.089 CC test/blobfs/mkfs/mkfs.o 00:05:54.089 CC test/nvme/reserve/reserve.o 00:05:54.350 LINK startup 00:05:54.350 CC examples/nvme/arbitration/arbitration.o 00:05:54.350 CXX test/cpp_headers/fsdev.o 00:05:54.350 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:54.350 CC examples/accel/perf/accel_perf.o 00:05:54.350 LINK mkfs 00:05:54.350 LINK reserve 00:05:54.350 LINK nvme_manage 00:05:54.350 CXX test/cpp_headers/fsdev_module.o 00:05:54.350 CC examples/nvme/hotplug/hotplug.o 00:05:54.609 LINK vhost_fuzz 00:05:54.609 LINK hello_fsdev 00:05:54.609 CXX test/cpp_headers/ftl.o 00:05:54.609 LINK arbitration 00:05:54.609 CC test/nvme/simple_copy/simple_copy.o 00:05:54.609 CC test/nvme/connect_stress/connect_stress.o 00:05:54.609 CC 
test/nvme/boot_partition/boot_partition.o 00:05:54.609 LINK hotplug 00:05:54.869 CC test/nvme/compliance/nvme_compliance.o 00:05:54.869 CXX test/cpp_headers/fuse_dispatcher.o 00:05:54.869 LINK dif 00:05:54.869 LINK boot_partition 00:05:54.869 LINK connect_stress 00:05:54.869 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:54.869 LINK simple_copy 00:05:54.869 LINK accel_perf 00:05:54.869 CC examples/nvme/abort/abort.o 00:05:54.869 CXX test/cpp_headers/gpt_spec.o 00:05:55.129 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:55.129 CC test/nvme/fused_ordering/fused_ordering.o 00:05:55.129 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:55.129 LINK cmb_copy 00:05:55.129 CC test/nvme/fdp/fdp.o 00:05:55.129 LINK nvme_compliance 00:05:55.129 CXX test/cpp_headers/hexlify.o 00:05:55.129 CC test/nvme/cuse/cuse.o 00:05:55.129 LINK pmr_persistence 00:05:55.389 LINK abort 00:05:55.389 CXX test/cpp_headers/histogram_data.o 00:05:55.389 LINK fused_ordering 00:05:55.389 LINK doorbell_aers 00:05:55.389 CC examples/blob/hello_world/hello_blob.o 00:05:55.389 CC examples/blob/cli/blobcli.o 00:05:55.389 CXX test/cpp_headers/idxd.o 00:05:55.389 CXX test/cpp_headers/idxd_spec.o 00:05:55.389 LINK fdp 00:05:55.648 CC examples/bdev/hello_world/hello_bdev.o 00:05:55.648 CC examples/bdev/bdevperf/bdevperf.o 00:05:55.648 LINK hello_blob 00:05:55.648 CXX test/cpp_headers/init.o 00:05:55.648 CXX test/cpp_headers/ioat.o 00:05:55.648 CC test/bdev/bdevio/bdevio.o 00:05:55.648 CXX test/cpp_headers/ioat_spec.o 00:05:55.648 CC test/lvol/esnap/esnap.o 00:05:55.907 LINK hello_bdev 00:05:55.907 CXX test/cpp_headers/iscsi_spec.o 00:05:55.907 CXX test/cpp_headers/json.o 00:05:55.907 CXX test/cpp_headers/jsonrpc.o 00:05:55.907 CXX test/cpp_headers/keyring.o 00:05:55.907 LINK blobcli 00:05:55.907 CXX test/cpp_headers/keyring_module.o 00:05:55.907 CXX test/cpp_headers/likely.o 00:05:55.907 CXX test/cpp_headers/log.o 00:05:55.907 CXX test/cpp_headers/lvol.o 00:05:56.167 CXX test/cpp_headers/md5.o 00:05:56.167 LINK bdevio 00:05:56.167 CXX test/cpp_headers/memory.o 00:05:56.167 CXX test/cpp_headers/mmio.o 00:05:56.167 CXX test/cpp_headers/nbd.o 00:05:56.167 CXX test/cpp_headers/net.o 00:05:56.167 CXX test/cpp_headers/notify.o 00:05:56.167 CXX test/cpp_headers/nvme.o 00:05:56.426 CXX test/cpp_headers/nvme_intel.o 00:05:56.426 CXX test/cpp_headers/nvme_ocssd.o 00:05:56.426 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:56.426 CXX test/cpp_headers/nvme_spec.o 00:05:56.426 CXX test/cpp_headers/nvme_zns.o 00:05:56.426 CXX test/cpp_headers/nvmf_cmd.o 00:05:56.426 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:56.426 CXX test/cpp_headers/nvmf.o 00:05:56.426 CXX test/cpp_headers/nvmf_spec.o 00:05:56.426 LINK bdevperf 00:05:56.426 CXX test/cpp_headers/nvmf_transport.o 00:05:56.426 CXX test/cpp_headers/opal.o 00:05:56.427 CXX test/cpp_headers/opal_spec.o 00:05:56.685 CXX test/cpp_headers/pci_ids.o 00:05:56.685 CXX test/cpp_headers/pipe.o 00:05:56.685 LINK cuse 00:05:56.685 CXX test/cpp_headers/queue.o 00:05:56.685 CXX test/cpp_headers/reduce.o 00:05:56.685 CXX test/cpp_headers/rpc.o 00:05:56.685 CXX test/cpp_headers/scheduler.o 00:05:56.685 CXX test/cpp_headers/scsi.o 00:05:56.685 CXX test/cpp_headers/scsi_spec.o 00:05:56.685 CXX test/cpp_headers/sock.o 00:05:56.685 CXX test/cpp_headers/stdinc.o 00:05:56.944 CXX test/cpp_headers/string.o 00:05:56.944 CXX test/cpp_headers/thread.o 00:05:56.944 CXX test/cpp_headers/trace.o 00:05:56.944 CXX test/cpp_headers/trace_parser.o 00:05:56.944 CXX test/cpp_headers/tree.o 00:05:56.944 CXX test/cpp_headers/ublk.o 
00:05:56.944 CXX test/cpp_headers/uuid.o 00:05:56.944 CXX test/cpp_headers/util.o 00:05:56.944 CXX test/cpp_headers/version.o 00:05:56.944 CXX test/cpp_headers/vfio_user_pci.o 00:05:56.944 CXX test/cpp_headers/vfio_user_spec.o 00:05:56.944 CC examples/nvmf/nvmf/nvmf.o 00:05:57.203 CXX test/cpp_headers/vhost.o 00:05:57.203 CXX test/cpp_headers/vmd.o 00:05:57.203 CXX test/cpp_headers/xor.o 00:05:57.203 CXX test/cpp_headers/zipf.o 00:05:57.460 LINK nvmf 00:06:01.659 LINK esnap 00:06:01.920 00:06:01.920 real 1m33.817s 00:06:01.920 user 7m11.219s 00:06:01.920 sys 1m16.735s 00:06:01.920 08:27:23 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:01.920 08:27:23 make -- common/autotest_common.sh@10 -- $ set +x 00:06:01.920 ************************************ 00:06:01.920 END TEST make 00:06:01.920 ************************************ 00:06:01.920 08:27:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:01.920 08:27:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:01.920 08:27:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:01.920 08:27:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:01.920 08:27:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:01.920 08:27:23 -- pm/common@44 -- $ pid=6248 00:06:01.920 08:27:23 -- pm/common@50 -- $ kill -TERM 6248 00:06:01.920 08:27:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:01.920 08:27:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:01.920 08:27:23 -- pm/common@44 -- $ pid=6250 00:06:01.920 08:27:23 -- pm/common@50 -- $ kill -TERM 6250 00:06:01.921 08:27:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:01.921 08:27:23 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:02.192 08:27:23 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.192 08:27:23 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.192 08:27:23 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.192 08:27:23 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.192 08:27:23 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.192 08:27:23 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.192 08:27:23 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.192 08:27:23 -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.192 08:27:23 -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.192 08:27:23 -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.192 08:27:23 -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.192 08:27:23 -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.192 08:27:23 -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.192 08:27:23 -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.192 08:27:23 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.192 08:27:23 -- scripts/common.sh@344 -- # case "$op" in 00:06:02.192 08:27:23 -- scripts/common.sh@345 -- # : 1 00:06:02.192 08:27:23 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.192 08:27:23 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.192 08:27:23 -- scripts/common.sh@365 -- # decimal 1 00:06:02.192 08:27:23 -- scripts/common.sh@353 -- # local d=1 00:06:02.192 08:27:23 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.192 08:27:23 -- scripts/common.sh@355 -- # echo 1 00:06:02.192 08:27:23 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.192 08:27:23 -- scripts/common.sh@366 -- # decimal 2 00:06:02.192 08:27:23 -- scripts/common.sh@353 -- # local d=2 00:06:02.192 08:27:23 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.192 08:27:23 -- scripts/common.sh@355 -- # echo 2 00:06:02.192 08:27:23 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.192 08:27:23 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.192 08:27:23 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.192 08:27:23 -- scripts/common.sh@368 -- # return 0 00:06:02.192 08:27:23 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.192 08:27:23 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.192 --rc genhtml_branch_coverage=1 00:06:02.192 --rc genhtml_function_coverage=1 00:06:02.192 --rc genhtml_legend=1 00:06:02.192 --rc geninfo_all_blocks=1 00:06:02.192 --rc geninfo_unexecuted_blocks=1 00:06:02.192 00:06:02.192 ' 00:06:02.192 08:27:23 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.192 --rc genhtml_branch_coverage=1 00:06:02.192 --rc genhtml_function_coverage=1 00:06:02.192 --rc genhtml_legend=1 00:06:02.192 --rc geninfo_all_blocks=1 00:06:02.192 --rc geninfo_unexecuted_blocks=1 00:06:02.192 00:06:02.192 ' 00:06:02.192 08:27:23 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.192 --rc genhtml_branch_coverage=1 00:06:02.192 --rc genhtml_function_coverage=1 00:06:02.192 --rc genhtml_legend=1 00:06:02.192 --rc geninfo_all_blocks=1 00:06:02.192 --rc geninfo_unexecuted_blocks=1 00:06:02.192 00:06:02.192 ' 00:06:02.192 08:27:23 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.192 --rc genhtml_branch_coverage=1 00:06:02.192 --rc genhtml_function_coverage=1 00:06:02.192 --rc genhtml_legend=1 00:06:02.192 --rc geninfo_all_blocks=1 00:06:02.192 --rc geninfo_unexecuted_blocks=1 00:06:02.192 00:06:02.192 ' 00:06:02.192 08:27:23 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:02.192 08:27:23 -- nvmf/common.sh@7 -- # uname -s 00:06:02.192 08:27:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.192 08:27:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.192 08:27:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.192 08:27:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.192 08:27:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.192 08:27:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.192 08:27:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.192 08:27:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.192 08:27:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.192 08:27:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.192 08:27:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bba91b1-3e47-4a92-b42d-fd1dd4e0f3d4 00:06:02.192 
08:27:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=7bba91b1-3e47-4a92-b42d-fd1dd4e0f3d4 00:06:02.192 08:27:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.192 08:27:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.193 08:27:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:02.193 08:27:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.193 08:27:24 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:02.193 08:27:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.193 08:27:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.193 08:27:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.193 08:27:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.193 08:27:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.193 08:27:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.193 08:27:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.193 08:27:24 -- paths/export.sh@5 -- # export PATH 00:06:02.193 08:27:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.193 08:27:24 -- nvmf/common.sh@51 -- # : 0 00:06:02.193 08:27:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:02.193 08:27:24 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:02.193 08:27:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.193 08:27:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.193 08:27:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.193 08:27:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:02.193 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:02.193 08:27:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:02.193 08:27:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:02.193 08:27:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:02.193 08:27:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:02.193 08:27:24 -- spdk/autotest.sh@32 -- # uname -s 00:06:02.193 08:27:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:02.193 08:27:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:02.193 08:27:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:02.193 08:27:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:02.193 08:27:24 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:02.193 08:27:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:02.453 08:27:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:02.453 08:27:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:02.453 08:27:24 -- spdk/autotest.sh@48 -- # udevadm_pid=67277 00:06:02.453 08:27:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:02.453 08:27:24 -- pm/common@17 -- # local monitor 00:06:02.453 08:27:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:02.453 08:27:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.453 08:27:24 -- pm/common@21 -- # date +%s 00:06:02.453 08:27:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.453 08:27:24 -- pm/common@25 -- # sleep 1 00:06:02.453 08:27:24 -- pm/common@21 -- # date +%s 00:06:02.453 08:27:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732004844 00:06:02.453 08:27:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732004844 00:06:02.453 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732004844_collect-cpu-load.pm.log 00:06:02.453 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732004844_collect-vmstat.pm.log 00:06:03.391 08:27:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:03.391 08:27:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:03.391 08:27:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.391 08:27:25 -- common/autotest_common.sh@10 -- # set +x 00:06:03.391 08:27:25 -- spdk/autotest.sh@59 -- # create_test_list 00:06:03.391 08:27:25 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:03.391 08:27:25 -- common/autotest_common.sh@10 -- # set +x 00:06:03.391 08:27:25 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:03.391 08:27:25 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:03.391 08:27:25 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:03.391 08:27:25 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:03.391 08:27:25 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:03.391 08:27:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:03.391 08:27:25 -- common/autotest_common.sh@1457 -- # uname 00:06:03.391 08:27:25 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:03.391 08:27:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:03.391 08:27:25 -- common/autotest_common.sh@1477 -- # uname 00:06:03.391 08:27:25 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:03.391 08:27:25 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:03.391 08:27:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:03.391 lcov: LCOV version 1.15 00:06:03.391 08:27:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:18.389 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:18.389 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:33.268 08:27:54 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:33.268 08:27:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.268 08:27:54 -- common/autotest_common.sh@10 -- # set +x 00:06:33.268 08:27:54 -- spdk/autotest.sh@78 -- # rm -f 00:06:33.268 08:27:54 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:33.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:33.838 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:33.838 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:33.838 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:06:33.838 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:06:33.838 08:27:55 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:33.838 08:27:55 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:33.838 08:27:55 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:33.838 08:27:55 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:33.838 08:27:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.838 08:27:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:33.838 08:27:55 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:33.838 08:27:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:33.838 08:27:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.838 08:27:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.838 08:27:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:33.838 08:27:55 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:33.838 08:27:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:33.838 08:27:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.838 08:27:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.838 08:27:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:33.838 08:27:55 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:33.838 08:27:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:33.838 08:27:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.838 08:27:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.838 08:27:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:33.838 08:27:55 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:33.838 08:27:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:33.838 08:27:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.838 08:27:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.838 08:27:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:33.838 08:27:55 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:33.838 08:27:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:33.838 08:27:55 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.838 08:27:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.838 08:27:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:33.838 08:27:55 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:33.838 08:27:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:33.838 08:27:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.838 08:27:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.838 08:27:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:33.838 08:27:55 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:33.838 08:27:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:33.838 08:27:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.838 08:27:55 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:33.838 08:27:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:33.838 08:27:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:34.098 08:27:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:34.098 08:27:55 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:34.098 08:27:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:34.098 No valid GPT data, bailing 00:06:34.098 08:27:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:34.098 08:27:55 -- scripts/common.sh@394 -- # pt= 00:06:34.098 08:27:55 -- scripts/common.sh@395 -- # return 1 00:06:34.098 08:27:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:34.098 1+0 records in 00:06:34.098 1+0 records out 00:06:34.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171805 s, 61.0 MB/s 00:06:34.098 08:27:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:34.098 08:27:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:34.098 08:27:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:34.098 08:27:55 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:34.098 08:27:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:34.098 No valid GPT data, bailing 00:06:34.098 08:27:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:34.098 08:27:55 -- scripts/common.sh@394 -- # pt= 00:06:34.098 08:27:55 -- scripts/common.sh@395 -- # return 1 00:06:34.098 08:27:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:34.098 1+0 records in 00:06:34.098 1+0 records out 00:06:34.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00622784 s, 168 MB/s 00:06:34.098 08:27:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:34.098 08:27:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:34.098 08:27:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:34.098 08:27:55 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:34.098 08:27:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:34.098 No valid GPT data, bailing 00:06:34.098 08:27:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:34.098 08:27:55 -- scripts/common.sh@394 -- # pt= 00:06:34.098 08:27:55 -- scripts/common.sh@395 -- # return 1 00:06:34.098 08:27:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:34.098 1+0 
records in 00:06:34.098 1+0 records out 00:06:34.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534333 s, 196 MB/s 00:06:34.098 08:27:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:34.098 08:27:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:34.098 08:27:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:34.098 08:27:55 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:34.098 08:27:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:34.359 No valid GPT data, bailing 00:06:34.359 08:27:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:34.359 08:27:56 -- scripts/common.sh@394 -- # pt= 00:06:34.359 08:27:56 -- scripts/common.sh@395 -- # return 1 00:06:34.359 08:27:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:34.359 1+0 records in 00:06:34.359 1+0 records out 00:06:34.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00636389 s, 165 MB/s 00:06:34.359 08:27:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:34.359 08:27:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:34.359 08:27:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:06:34.359 08:27:56 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:06:34.359 08:27:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:06:34.359 No valid GPT data, bailing 00:06:34.359 08:27:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:34.359 08:27:56 -- scripts/common.sh@394 -- # pt= 00:06:34.359 08:27:56 -- scripts/common.sh@395 -- # return 1 00:06:34.359 08:27:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:06:34.359 1+0 records in 00:06:34.359 1+0 records out 00:06:34.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00699464 s, 150 MB/s 00:06:34.359 08:27:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:34.359 08:27:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:34.359 08:27:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:06:34.359 08:27:56 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:06:34.359 08:27:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:06:34.359 No valid GPT data, bailing 00:06:34.359 08:27:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:34.359 08:27:56 -- scripts/common.sh@394 -- # pt= 00:06:34.359 08:27:56 -- scripts/common.sh@395 -- # return 1 00:06:34.359 08:27:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:06:34.359 1+0 records in 00:06:34.359 1+0 records out 00:06:34.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504377 s, 208 MB/s 00:06:34.359 08:27:56 -- spdk/autotest.sh@105 -- # sync 00:06:34.618 08:27:56 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:34.618 08:27:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:34.618 08:27:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:37.910 08:27:59 -- spdk/autotest.sh@111 -- # uname -s 00:06:37.910 08:27:59 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:37.910 08:27:59 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:37.910 08:27:59 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:38.169 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:38.427 
Hugepages 00:06:38.427 node hugesize free / total 00:06:38.427 node0 1048576kB 0 / 0 00:06:38.427 node0 2048kB 0 / 0 00:06:38.427 00:06:38.427 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:38.696 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:38.696 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:38.967 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:06:38.967 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:38.967 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:38.967 08:28:00 -- spdk/autotest.sh@117 -- # uname -s 00:06:38.967 08:28:00 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:38.967 08:28:00 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:38.967 08:28:00 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:39.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:40.470 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:40.470 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:40.470 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:40.470 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:40.470 08:28:02 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:41.875 08:28:03 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:41.875 08:28:03 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:41.876 08:28:03 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:41.876 08:28:03 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:41.876 08:28:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:41.876 08:28:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:41.876 08:28:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:41.876 08:28:03 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:41.876 08:28:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:41.876 08:28:03 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:06:41.876 08:28:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:41.876 08:28:03 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:42.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:42.426 Waiting for block devices as requested 00:06:42.426 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.426 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.684 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.684 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:48.035 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:48.035 08:28:09 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:48.035 08:28:09 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:48.035 08:28:09 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:48.035 08:28:09 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:48.035 08:28:09 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:48.035 08:28:09 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:48.035 08:28:09 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:48.035 08:28:09 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:48.035 08:28:09 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:48.035 08:28:09 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:48.035 08:28:09 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:48.035 08:28:09 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1543 -- # continue 00:06:48.035 08:28:09 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:48.035 08:28:09 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:48.035 08:28:09 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:48.035 08:28:09 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:48.035 08:28:09 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:48.035 08:28:09 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:48.035 08:28:09 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:48.035 08:28:09 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:48.035 08:28:09 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:48.035 08:28:09 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:48.035 08:28:09 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:48.035 08:28:09 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1543 -- # continue 00:06:48.035 08:28:09 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:48.035 08:28:09 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:48.035 08:28:09 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:48.035 08:28:09 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:06:48.035 08:28:09 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:48.035 08:28:09 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:48.035 08:28:09 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:48.035 08:28:09 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1543 -- # continue 00:06:48.035 08:28:09 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:48.035 08:28:09 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:48.035 08:28:09 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:06:48.035 08:28:09 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:48.035 08:28:09 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:48.035 08:28:09 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:48.035 08:28:09 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:06:48.035 08:28:09 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:06:48.035 08:28:09 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:48.035 08:28:09 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:48.035 08:28:09 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:48.035 08:28:09 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:48.035 08:28:09 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:06:48.035 08:28:09 -- common/autotest_common.sh@1543 -- # continue 00:06:48.035 08:28:09 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:48.035 08:28:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:48.035 08:28:09 -- common/autotest_common.sh@10 -- # set +x 00:06:48.035 08:28:09 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:48.035 08:28:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.035 08:28:09 -- common/autotest_common.sh@10 -- # set +x 00:06:48.035 08:28:09 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:48.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:49.533 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:49.533 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:49.533 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:49.533 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:49.533 08:28:11 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:49.533 08:28:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:49.533 08:28:11 -- common/autotest_common.sh@10 -- # set +x 00:06:49.533 08:28:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:49.533 08:28:11 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:49.533 08:28:11 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:49.533 08:28:11 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:49.533 08:28:11 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:49.533 08:28:11 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:49.533 08:28:11 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:49.533 08:28:11 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:49.533 08:28:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:49.533 08:28:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:49.533 08:28:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:49.533 08:28:11 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:49.533 08:28:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:49.791 08:28:11 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:06:49.791 08:28:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:49.791 08:28:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:49.791 08:28:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:49.791 08:28:11 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:49.791 08:28:11 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:49.791 08:28:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:49.791 08:28:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:49.791 08:28:11 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:49.791 08:28:11 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:49.791 08:28:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:49.791 08:28:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:49.791 08:28:11 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:49.791 08:28:11 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:06:49.791 08:28:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:49.791 08:28:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:49.791 08:28:11 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:49.791 08:28:11 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:49.791 08:28:11 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:49.791 08:28:11 -- common/autotest_common.sh@1572 -- # return 0 00:06:49.791 08:28:11 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:49.791 08:28:11 -- common/autotest_common.sh@1580 -- # return 0 00:06:49.791 08:28:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:49.791 08:28:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:49.791 08:28:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:49.791 08:28:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:49.791 08:28:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:49.791 08:28:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.791 08:28:11 -- common/autotest_common.sh@10 -- # set +x 00:06:49.791 08:28:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:49.791 08:28:11 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:49.791 08:28:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.791 08:28:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.791 08:28:11 -- common/autotest_common.sh@10 -- # set +x 00:06:49.791 ************************************ 00:06:49.791 START TEST env 00:06:49.791 ************************************ 00:06:49.791 08:28:11 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:49.791 * Looking for test storage... 00:06:49.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:49.791 08:28:11 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.791 08:28:11 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.791 08:28:11 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.050 08:28:11 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.050 08:28:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.050 08:28:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.050 08:28:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.050 08:28:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.050 08:28:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.050 08:28:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.050 08:28:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.050 08:28:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.050 08:28:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.050 08:28:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.050 08:28:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.050 08:28:11 env -- scripts/common.sh@344 -- # case "$op" in 00:06:50.050 08:28:11 env -- scripts/common.sh@345 -- # : 1 00:06:50.050 08:28:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.050 08:28:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.050 08:28:11 env -- scripts/common.sh@365 -- # decimal 1 00:06:50.050 08:28:11 env -- scripts/common.sh@353 -- # local d=1 00:06:50.050 08:28:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.050 08:28:11 env -- scripts/common.sh@355 -- # echo 1 00:06:50.050 08:28:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.050 08:28:11 env -- scripts/common.sh@366 -- # decimal 2 00:06:50.050 08:28:11 env -- scripts/common.sh@353 -- # local d=2 00:06:50.050 08:28:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.050 08:28:11 env -- scripts/common.sh@355 -- # echo 2 00:06:50.050 08:28:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.050 08:28:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.050 08:28:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.050 08:28:11 env -- scripts/common.sh@368 -- # return 0 00:06:50.050 08:28:11 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.050 08:28:11 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.050 --rc genhtml_branch_coverage=1 00:06:50.050 --rc genhtml_function_coverage=1 00:06:50.050 --rc genhtml_legend=1 00:06:50.050 --rc geninfo_all_blocks=1 00:06:50.050 --rc geninfo_unexecuted_blocks=1 00:06:50.050 00:06:50.050 ' 00:06:50.050 08:28:11 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.050 --rc genhtml_branch_coverage=1 00:06:50.050 --rc genhtml_function_coverage=1 00:06:50.050 --rc genhtml_legend=1 00:06:50.050 --rc geninfo_all_blocks=1 00:06:50.050 --rc geninfo_unexecuted_blocks=1 00:06:50.050 00:06:50.050 ' 00:06:50.050 08:28:11 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.050 --rc genhtml_branch_coverage=1 00:06:50.050 --rc genhtml_function_coverage=1 00:06:50.050 --rc genhtml_legend=1 00:06:50.050 --rc geninfo_all_blocks=1 00:06:50.050 --rc geninfo_unexecuted_blocks=1 00:06:50.050 00:06:50.050 ' 00:06:50.050 08:28:11 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.050 --rc genhtml_branch_coverage=1 00:06:50.050 --rc genhtml_function_coverage=1 00:06:50.050 --rc genhtml_legend=1 00:06:50.050 --rc geninfo_all_blocks=1 00:06:50.050 --rc geninfo_unexecuted_blocks=1 00:06:50.050 00:06:50.050 ' 00:06:50.050 08:28:11 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:50.050 08:28:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.050 08:28:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.050 08:28:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:50.050 ************************************ 00:06:50.050 START TEST env_memory 00:06:50.050 ************************************ 00:06:50.050 08:28:11 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:50.050 00:06:50.050 00:06:50.050 CUnit - A unit testing framework for C - Version 2.1-3 00:06:50.050 http://cunit.sourceforge.net/ 00:06:50.050 00:06:50.050 00:06:50.050 Suite: memory 00:06:50.050 Test: alloc and free memory map ...[2024-11-19 08:28:11.862565] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:50.050 passed 00:06:50.050 Test: mem map translation ...[2024-11-19 08:28:11.929031] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:50.050 [2024-11-19 08:28:11.929173] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:50.050 [2024-11-19 08:28:11.929311] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:50.050 [2024-11-19 08:28:11.929361] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:50.308 passed 00:06:50.308 Test: mem map registration ...[2024-11-19 08:28:12.002805] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:50.308 [2024-11-19 08:28:12.002913] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:50.308 passed 00:06:50.308 Test: mem map adjacent registrations ...passed 00:06:50.308 00:06:50.308 Run Summary: Type Total Ran Passed Failed Inactive 00:06:50.308 suites 1 1 n/a 0 0 00:06:50.308 tests 4 4 4 0 0 00:06:50.308 asserts 152 152 152 0 n/a 00:06:50.308 00:06:50.308 Elapsed time = 0.274 seconds 00:06:50.308 00:06:50.308 real 0m0.326s 00:06:50.308 user 0m0.286s 00:06:50.308 sys 0m0.029s 00:06:50.308 08:28:12 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.308 08:28:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:50.308 ************************************ 00:06:50.308 END TEST env_memory 00:06:50.308 ************************************ 00:06:50.308 08:28:12 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:50.308 08:28:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.308 08:28:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.308 08:28:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:50.308 ************************************ 00:06:50.308 START TEST env_vtophys 00:06:50.308 ************************************ 00:06:50.308 08:28:12 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:50.566 EAL: lib.eal log level changed from notice to debug 00:06:50.566 EAL: Detected lcore 0 as core 0 on socket 0 00:06:50.566 EAL: Detected lcore 1 as core 0 on socket 0 00:06:50.566 EAL: Detected lcore 2 as core 0 on socket 0 00:06:50.566 EAL: Detected lcore 3 as core 0 on socket 0 00:06:50.566 EAL: Detected lcore 4 as core 0 on socket 0 00:06:50.566 EAL: Detected lcore 5 as core 0 on socket 0 00:06:50.566 EAL: Detected lcore 6 as core 0 on socket 0 00:06:50.566 EAL: Detected lcore 7 as core 0 on socket 0 00:06:50.566 EAL: Detected lcore 8 as core 0 on socket 0 00:06:50.566 EAL: Detected lcore 9 as core 0 on socket 0 00:06:50.566 EAL: Maximum logical cores by configuration: 128 00:06:50.566 EAL: Detected CPU lcores: 10 00:06:50.566 EAL: Detected NUMA nodes: 1 00:06:50.566 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:50.566 EAL: Detected shared linkage of DPDK 00:06:50.566 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:50.566 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:50.566 EAL: Registered [vdev] bus. 00:06:50.566 EAL: bus.vdev log level changed from disabled to notice 00:06:50.566 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:50.566 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:50.566 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:50.566 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:50.566 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:50.566 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:50.566 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:50.566 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:50.566 EAL: No shared files mode enabled, IPC will be disabled 00:06:50.566 EAL: No shared files mode enabled, IPC is disabled 00:06:50.566 EAL: Selected IOVA mode 'PA' 00:06:50.566 EAL: Probing VFIO support... 00:06:50.566 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:50.566 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:50.566 EAL: Ask a virtual area of 0x2e000 bytes 00:06:50.566 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:50.566 EAL: Setting up physically contiguous memory... 00:06:50.566 EAL: Setting maximum number of open files to 524288 00:06:50.566 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:50.566 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:50.566 EAL: Ask a virtual area of 0x61000 bytes 00:06:50.566 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:50.566 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:50.566 EAL: Ask a virtual area of 0x400000000 bytes 00:06:50.566 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:50.566 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:50.566 EAL: Ask a virtual area of 0x61000 bytes 00:06:50.566 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:50.566 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:50.566 EAL: Ask a virtual area of 0x400000000 bytes 00:06:50.566 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:50.566 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:50.566 EAL: Ask a virtual area of 0x61000 bytes 00:06:50.566 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:50.566 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:50.566 EAL: Ask a virtual area of 0x400000000 bytes 00:06:50.566 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:50.566 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:50.566 EAL: Ask a virtual area of 0x61000 bytes 00:06:50.566 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:50.566 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:50.566 EAL: Ask a virtual area of 0x400000000 bytes 00:06:50.566 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:50.566 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:06:50.566 EAL: Hugepages will be freed exactly as allocated. 00:06:50.566 EAL: No shared files mode enabled, IPC is disabled 00:06:50.566 EAL: No shared files mode enabled, IPC is disabled 00:06:50.566 EAL: TSC frequency is ~2290000 KHz 00:06:50.566 EAL: Main lcore 0 is ready (tid=7fac38675a40;cpuset=[0]) 00:06:50.566 EAL: Trying to obtain current memory policy. 00:06:50.566 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.566 EAL: Restoring previous memory policy: 0 00:06:50.566 EAL: request: mp_malloc_sync 00:06:50.566 EAL: No shared files mode enabled, IPC is disabled 00:06:50.566 EAL: Heap on socket 0 was expanded by 2MB 00:06:50.566 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:50.566 EAL: No shared files mode enabled, IPC is disabled 00:06:50.566 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:50.566 EAL: Mem event callback 'spdk:(nil)' registered 00:06:50.566 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:50.566 00:06:50.566 00:06:50.566 CUnit - A unit testing framework for C - Version 2.1-3 00:06:50.566 http://cunit.sourceforge.net/ 00:06:50.566 00:06:50.566 00:06:50.566 Suite: components_suite 00:06:51.132 Test: vtophys_malloc_test ...passed 00:06:51.132 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:51.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:51.132 EAL: Restoring previous memory policy: 4 00:06:51.132 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.132 EAL: request: mp_malloc_sync 00:06:51.132 EAL: No shared files mode enabled, IPC is disabled 00:06:51.132 EAL: Heap on socket 0 was expanded by 4MB 00:06:51.132 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.132 EAL: request: mp_malloc_sync 00:06:51.132 EAL: No shared files mode enabled, IPC is disabled 00:06:51.132 EAL: Heap on socket 0 was shrunk by 4MB 00:06:51.132 EAL: Trying to obtain current memory policy. 00:06:51.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:51.132 EAL: Restoring previous memory policy: 4 00:06:51.132 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.132 EAL: request: mp_malloc_sync 00:06:51.132 EAL: No shared files mode enabled, IPC is disabled 00:06:51.132 EAL: Heap on socket 0 was expanded by 6MB 00:06:51.132 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.132 EAL: request: mp_malloc_sync 00:06:51.132 EAL: No shared files mode enabled, IPC is disabled 00:06:51.132 EAL: Heap on socket 0 was shrunk by 6MB 00:06:51.132 EAL: Trying to obtain current memory policy. 00:06:51.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:51.132 EAL: Restoring previous memory policy: 4 00:06:51.132 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.132 EAL: request: mp_malloc_sync 00:06:51.132 EAL: No shared files mode enabled, IPC is disabled 00:06:51.132 EAL: Heap on socket 0 was expanded by 10MB 00:06:51.132 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.132 EAL: request: mp_malloc_sync 00:06:51.132 EAL: No shared files mode enabled, IPC is disabled 00:06:51.132 EAL: Heap on socket 0 was shrunk by 10MB 00:06:51.132 EAL: Trying to obtain current memory policy. 
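The EAL messages above report that the vfio and vfio_pci kernel modules are absent, so this run falls back to IOVA mode 'PA' with no VFIO support. A minimal sketch of how VFIO could be enabled before a re-run on a VM like this one (not part of the captured run; the noiommu toggle is an assumption for IOMMU-less guests, not something this log shows):

  sudo modprobe vfio-pci                      # loads vfio_pci and pulls in the vfio module
  lsmod | grep vfio                           # confirm both modules are now present
  # only on hosts/guests without a working IOMMU (unsafe; test environments only):
  echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode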
00:06:51.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:51.132 EAL: Restoring previous memory policy: 4 00:06:51.132 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.132 EAL: request: mp_malloc_sync 00:06:51.133 EAL: No shared files mode enabled, IPC is disabled 00:06:51.133 EAL: Heap on socket 0 was expanded by 18MB 00:06:51.133 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.133 EAL: request: mp_malloc_sync 00:06:51.133 EAL: No shared files mode enabled, IPC is disabled 00:06:51.133 EAL: Heap on socket 0 was shrunk by 18MB 00:06:51.133 EAL: Trying to obtain current memory policy. 00:06:51.133 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:51.133 EAL: Restoring previous memory policy: 4 00:06:51.133 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.133 EAL: request: mp_malloc_sync 00:06:51.133 EAL: No shared files mode enabled, IPC is disabled 00:06:51.133 EAL: Heap on socket 0 was expanded by 34MB 00:06:51.133 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.133 EAL: request: mp_malloc_sync 00:06:51.133 EAL: No shared files mode enabled, IPC is disabled 00:06:51.133 EAL: Heap on socket 0 was shrunk by 34MB 00:06:51.133 EAL: Trying to obtain current memory policy. 00:06:51.133 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:51.133 EAL: Restoring previous memory policy: 4 00:06:51.133 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.133 EAL: request: mp_malloc_sync 00:06:51.133 EAL: No shared files mode enabled, IPC is disabled 00:06:51.133 EAL: Heap on socket 0 was expanded by 66MB 00:06:51.133 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.133 EAL: request: mp_malloc_sync 00:06:51.133 EAL: No shared files mode enabled, IPC is disabled 00:06:51.133 EAL: Heap on socket 0 was shrunk by 66MB 00:06:51.133 EAL: Trying to obtain current memory policy. 00:06:51.133 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:51.133 EAL: Restoring previous memory policy: 4 00:06:51.133 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.133 EAL: request: mp_malloc_sync 00:06:51.133 EAL: No shared files mode enabled, IPC is disabled 00:06:51.133 EAL: Heap on socket 0 was expanded by 130MB 00:06:51.133 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.133 EAL: request: mp_malloc_sync 00:06:51.133 EAL: No shared files mode enabled, IPC is disabled 00:06:51.133 EAL: Heap on socket 0 was shrunk by 130MB 00:06:51.133 EAL: Trying to obtain current memory policy. 00:06:51.133 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:51.133 EAL: Restoring previous memory policy: 4 00:06:51.133 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.133 EAL: request: mp_malloc_sync 00:06:51.133 EAL: No shared files mode enabled, IPC is disabled 00:06:51.133 EAL: Heap on socket 0 was expanded by 258MB 00:06:51.133 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.133 EAL: request: mp_malloc_sync 00:06:51.133 EAL: No shared files mode enabled, IPC is disabled 00:06:51.133 EAL: Heap on socket 0 was shrunk by 258MB 00:06:51.133 EAL: Trying to obtain current memory policy. 
00:06:51.133 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:51.390 EAL: Restoring previous memory policy: 4 00:06:51.390 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.390 EAL: request: mp_malloc_sync 00:06:51.390 EAL: No shared files mode enabled, IPC is disabled 00:06:51.390 EAL: Heap on socket 0 was expanded by 514MB 00:06:51.390 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.390 EAL: request: mp_malloc_sync 00:06:51.390 EAL: No shared files mode enabled, IPC is disabled 00:06:51.390 EAL: Heap on socket 0 was shrunk by 514MB 00:06:51.390 EAL: Trying to obtain current memory policy. 00:06:51.390 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:51.647 EAL: Restoring previous memory policy: 4 00:06:51.647 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.647 EAL: request: mp_malloc_sync 00:06:51.647 EAL: No shared files mode enabled, IPC is disabled 00:06:51.647 EAL: Heap on socket 0 was expanded by 1026MB 00:06:51.905 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.162 passed 00:06:52.162 00:06:52.162 Run Summary: Type Total Ran Passed Failed Inactive 00:06:52.162 suites 1 1 n/a 0 0 00:06:52.162 tests 2 2 2 0 0 00:06:52.162 asserts 5603 5603 5603 0 n/a 00:06:52.162 00:06:52.162 Elapsed time = 1.407 seconds 00:06:52.162 EAL: request: mp_malloc_sync 00:06:52.162 EAL: No shared files mode enabled, IPC is disabled 00:06:52.162 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:52.162 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.162 EAL: request: mp_malloc_sync 00:06:52.162 EAL: No shared files mode enabled, IPC is disabled 00:06:52.162 EAL: Heap on socket 0 was shrunk by 2MB 00:06:52.162 EAL: No shared files mode enabled, IPC is disabled 00:06:52.162 EAL: No shared files mode enabled, IPC is disabled 00:06:52.162 EAL: No shared files mode enabled, IPC is disabled 00:06:52.162 00:06:52.162 real 0m1.659s 00:06:52.162 user 0m0.800s 00:06:52.162 sys 0m0.724s 00:06:52.162 08:28:13 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.162 ************************************ 00:06:52.162 END TEST env_vtophys 00:06:52.162 ************************************ 00:06:52.162 08:28:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:52.162 08:28:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:52.162 08:28:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.162 08:28:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.162 08:28:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:52.162 ************************************ 00:06:52.162 START TEST env_pci 00:06:52.162 ************************************ 00:06:52.162 08:28:13 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:52.162 00:06:52.162 00:06:52.162 CUnit - A unit testing framework for C - Version 2.1-3 00:06:52.162 http://cunit.sourceforge.net/ 00:06:52.162 00:06:52.162 00:06:52.162 Suite: pci 00:06:52.162 Test: pci_hook ...[2024-11-19 08:28:13.916545] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70049 has claimed it 00:06:52.162 passed 00:06:52.162 00:06:52.162 Run Summary: Type Total Ran Passed Failed Inactive 00:06:52.162 suites 1 1 n/a 0 0 00:06:52.163 tests 1 1 1 0 0 00:06:52.163 asserts 25 25 25 0 n/a 00:06:52.163 00:06:52.163 Elapsed time = 0.005 seconds 00:06:52.163 EAL: Cannot find 
device (10000:00:01.0) 00:06:52.163 EAL: Failed to attach device on primary process 00:06:52.163 00:06:52.163 real 0m0.075s 00:06:52.163 user 0m0.036s 00:06:52.163 sys 0m0.038s 00:06:52.163 08:28:13 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.163 08:28:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:52.163 ************************************ 00:06:52.163 END TEST env_pci 00:06:52.163 ************************************ 00:06:52.163 08:28:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:52.163 08:28:14 env -- env/env.sh@15 -- # uname 00:06:52.163 08:28:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:52.163 08:28:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:52.163 08:28:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:52.163 08:28:14 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:52.163 08:28:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.163 08:28:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:52.163 ************************************ 00:06:52.163 START TEST env_dpdk_post_init 00:06:52.163 ************************************ 00:06:52.163 08:28:14 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:52.420 EAL: Detected CPU lcores: 10 00:06:52.420 EAL: Detected NUMA nodes: 1 00:06:52.420 EAL: Detected shared linkage of DPDK 00:06:52.420 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:52.420 EAL: Selected IOVA mode 'PA' 00:06:52.420 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:52.420 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:52.420 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:52.420 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:52.420 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:52.420 Starting DPDK initialization... 00:06:52.420 Starting SPDK post initialization... 00:06:52.420 SPDK NVMe probe 00:06:52.420 Attaching to 0000:00:10.0 00:06:52.420 Attaching to 0000:00:11.0 00:06:52.420 Attaching to 0000:00:12.0 00:06:52.420 Attaching to 0000:00:13.0 00:06:52.420 Attached to 0000:00:10.0 00:06:52.420 Attached to 0000:00:11.0 00:06:52.420 Attached to 0000:00:13.0 00:06:52.420 Attached to 0000:00:12.0 00:06:52.420 Cleaning up... 
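The post-init test above attached the spdk_nvme driver to four emulated QEMU NVMe functions (vendor:device 1b36:0010 at 0000:00:10.0 through 0000:00:13.0). A sketch of a follow-up check one could run on the same VM afterwards, assuming the SPDK repo layout used elsewhere in this log (not part of the test itself):

  lspci -nn -d 1b36:0010            # list the emulated NVMe functions the test probed
  sudo ./scripts/setup.sh status    # show which driver (uio/vfio/kernel nvme) each one is bound to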
00:06:52.420 00:06:52.420 real 0m0.253s 00:06:52.420 user 0m0.074s 00:06:52.420 sys 0m0.084s 00:06:52.420 08:28:14 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.420 08:28:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:52.420 ************************************ 00:06:52.420 END TEST env_dpdk_post_init 00:06:52.420 ************************************ 00:06:52.678 08:28:14 env -- env/env.sh@26 -- # uname 00:06:52.678 08:28:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:52.678 08:28:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:52.678 08:28:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.678 08:28:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.678 08:28:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:52.678 ************************************ 00:06:52.678 START TEST env_mem_callbacks 00:06:52.678 ************************************ 00:06:52.678 08:28:14 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:52.678 EAL: Detected CPU lcores: 10 00:06:52.678 EAL: Detected NUMA nodes: 1 00:06:52.678 EAL: Detected shared linkage of DPDK 00:06:52.678 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:52.678 EAL: Selected IOVA mode 'PA' 00:06:52.678 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:52.678 00:06:52.678 00:06:52.678 CUnit - A unit testing framework for C - Version 2.1-3 00:06:52.678 http://cunit.sourceforge.net/ 00:06:52.678 00:06:52.678 00:06:52.678 Suite: memory 00:06:52.678 Test: test ... 00:06:52.678 register 0x200000200000 2097152 00:06:52.678 malloc 3145728 00:06:52.678 register 0x200000400000 4194304 00:06:52.678 buf 0x200000500000 len 3145728 PASSED 00:06:52.678 malloc 64 00:06:52.678 buf 0x2000004fff40 len 64 PASSED 00:06:52.678 malloc 4194304 00:06:52.678 register 0x200000800000 6291456 00:06:52.678 buf 0x200000a00000 len 4194304 PASSED 00:06:52.678 free 0x200000500000 3145728 00:06:52.678 free 0x2000004fff40 64 00:06:52.678 unregister 0x200000400000 4194304 PASSED 00:06:52.678 free 0x200000a00000 4194304 00:06:52.678 unregister 0x200000800000 6291456 PASSED 00:06:52.678 malloc 8388608 00:06:52.678 register 0x200000400000 10485760 00:06:52.678 buf 0x200000600000 len 8388608 PASSED 00:06:52.678 free 0x200000600000 8388608 00:06:52.678 unregister 0x200000400000 10485760 PASSED 00:06:52.678 passed 00:06:52.678 00:06:52.678 Run Summary: Type Total Ran Passed Failed Inactive 00:06:52.678 suites 1 1 n/a 0 0 00:06:52.678 tests 1 1 1 0 0 00:06:52.678 asserts 15 15 15 0 n/a 00:06:52.678 00:06:52.678 Elapsed time = 0.011 seconds 00:06:52.678 00:06:52.678 real 0m0.181s 00:06:52.678 user 0m0.025s 00:06:52.678 sys 0m0.055s 00:06:52.678 08:28:14 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.678 08:28:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:52.678 ************************************ 00:06:52.678 END TEST env_mem_callbacks 00:06:52.678 ************************************ 00:06:52.936 00:06:52.936 real 0m3.030s 00:06:52.936 user 0m1.454s 00:06:52.936 sys 0m1.255s 00:06:52.936 08:28:14 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.936 08:28:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:52.936 ************************************ 00:06:52.936 END TEST env 00:06:52.936 
************************************ 00:06:52.936 08:28:14 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:52.936 08:28:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.936 08:28:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.936 08:28:14 -- common/autotest_common.sh@10 -- # set +x 00:06:52.936 ************************************ 00:06:52.936 START TEST rpc 00:06:52.936 ************************************ 00:06:52.936 08:28:14 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:52.936 * Looking for test storage... 00:06:52.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:52.936 08:28:14 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.936 08:28:14 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.936 08:28:14 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:53.194 08:28:14 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:53.194 08:28:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.194 08:28:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.194 08:28:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.194 08:28:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.194 08:28:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.194 08:28:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.194 08:28:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.194 08:28:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.194 08:28:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.194 08:28:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.194 08:28:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.194 08:28:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:53.194 08:28:14 rpc -- scripts/common.sh@345 -- # : 1 00:06:53.194 08:28:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.194 08:28:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.194 08:28:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:53.194 08:28:14 rpc -- scripts/common.sh@353 -- # local d=1 00:06:53.194 08:28:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.194 08:28:14 rpc -- scripts/common.sh@355 -- # echo 1 00:06:53.194 08:28:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.194 08:28:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:53.194 08:28:14 rpc -- scripts/common.sh@353 -- # local d=2 00:06:53.194 08:28:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.194 08:28:14 rpc -- scripts/common.sh@355 -- # echo 2 00:06:53.194 08:28:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.194 08:28:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.194 08:28:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.194 08:28:14 rpc -- scripts/common.sh@368 -- # return 0 00:06:53.194 08:28:14 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.194 08:28:14 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.194 --rc genhtml_branch_coverage=1 00:06:53.194 --rc genhtml_function_coverage=1 00:06:53.194 --rc genhtml_legend=1 00:06:53.194 --rc geninfo_all_blocks=1 00:06:53.194 --rc geninfo_unexecuted_blocks=1 00:06:53.194 00:06:53.194 ' 00:06:53.194 08:28:14 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.194 --rc genhtml_branch_coverage=1 00:06:53.194 --rc genhtml_function_coverage=1 00:06:53.194 --rc genhtml_legend=1 00:06:53.194 --rc geninfo_all_blocks=1 00:06:53.194 --rc geninfo_unexecuted_blocks=1 00:06:53.194 00:06:53.194 ' 00:06:53.194 08:28:14 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.194 --rc genhtml_branch_coverage=1 00:06:53.194 --rc genhtml_function_coverage=1 00:06:53.194 --rc genhtml_legend=1 00:06:53.194 --rc geninfo_all_blocks=1 00:06:53.194 --rc geninfo_unexecuted_blocks=1 00:06:53.194 00:06:53.194 ' 00:06:53.194 08:28:14 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.194 --rc genhtml_branch_coverage=1 00:06:53.194 --rc genhtml_function_coverage=1 00:06:53.194 --rc genhtml_legend=1 00:06:53.194 --rc geninfo_all_blocks=1 00:06:53.194 --rc geninfo_unexecuted_blocks=1 00:06:53.194 00:06:53.194 ' 00:06:53.194 08:28:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70176 00:06:53.194 08:28:14 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:53.194 08:28:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:53.194 08:28:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70176 00:06:53.194 08:28:14 rpc -- common/autotest_common.sh@835 -- # '[' -z 70176 ']' 00:06:53.194 08:28:14 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.194 08:28:14 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.194 08:28:14 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
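At this point the rpc suite has launched spdk_tgt with '-e bdev' and is waiting for it to listen on /var/tmp/spdk.sock. The rpc_integrity test that follows issues its bdev RPCs through the rpc_cmd wrapper; roughly the same sequence can be reproduced by hand with scripts/rpc.py against the default socket once the target is up (a sketch using the same RPC names and arguments that appear later in this log):

  ./scripts/rpc.py bdev_malloc_create 8 512                       # 8 MiB malloc bdev with 512-byte blocks; prints its name (e.g. Malloc0)
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0   # layer a passthru bdev on top of it
  ./scripts/rpc.py bdev_get_bdevs | jq length                     # expect 2: Malloc0 plus Passthru0
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0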
00:06:53.194 08:28:14 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.194 08:28:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.194 [2024-11-19 08:28:14.981808] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:53.195 [2024-11-19 08:28:14.982333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70176 ] 00:06:53.453 [2024-11-19 08:28:15.139923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.453 [2024-11-19 08:28:15.168976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:53.453 [2024-11-19 08:28:15.169061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70176' to capture a snapshot of events at runtime. 00:06:53.453 [2024-11-19 08:28:15.169074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.453 [2024-11-19 08:28:15.169083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.453 [2024-11-19 08:28:15.169101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70176 for offline analysis/debug. 00:06:53.453 [2024-11-19 08:28:15.169529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.019 08:28:15 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.019 08:28:15 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:54.019 08:28:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:54.019 08:28:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:54.019 08:28:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:54.019 08:28:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:54.019 08:28:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.019 08:28:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.019 08:28:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.019 ************************************ 00:06:54.019 START TEST rpc_integrity 00:06:54.019 ************************************ 00:06:54.019 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:54.019 08:28:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:54.019 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.019 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:54.019 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.019 08:28:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:54.019 08:28:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:54.019 08:28:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:54.019 08:28:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:54.019 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.019 08:28:15 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:54.020 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.020 08:28:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:54.020 08:28:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:54.020 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.020 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:54.278 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.278 08:28:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:54.278 { 00:06:54.278 "name": "Malloc0", 00:06:54.278 "aliases": [ 00:06:54.278 "9bf984cf-7a77-44be-9b51-e049c33152ef" 00:06:54.278 ], 00:06:54.278 "product_name": "Malloc disk", 00:06:54.278 "block_size": 512, 00:06:54.278 "num_blocks": 16384, 00:06:54.278 "uuid": "9bf984cf-7a77-44be-9b51-e049c33152ef", 00:06:54.278 "assigned_rate_limits": { 00:06:54.278 "rw_ios_per_sec": 0, 00:06:54.278 "rw_mbytes_per_sec": 0, 00:06:54.278 "r_mbytes_per_sec": 0, 00:06:54.278 "w_mbytes_per_sec": 0 00:06:54.278 }, 00:06:54.278 "claimed": false, 00:06:54.278 "zoned": false, 00:06:54.278 "supported_io_types": { 00:06:54.278 "read": true, 00:06:54.278 "write": true, 00:06:54.278 "unmap": true, 00:06:54.278 "flush": true, 00:06:54.278 "reset": true, 00:06:54.278 "nvme_admin": false, 00:06:54.278 "nvme_io": false, 00:06:54.278 "nvme_io_md": false, 00:06:54.278 "write_zeroes": true, 00:06:54.278 "zcopy": true, 00:06:54.278 "get_zone_info": false, 00:06:54.278 "zone_management": false, 00:06:54.278 "zone_append": false, 00:06:54.278 "compare": false, 00:06:54.278 "compare_and_write": false, 00:06:54.278 "abort": true, 00:06:54.278 "seek_hole": false, 00:06:54.278 "seek_data": false, 00:06:54.278 "copy": true, 00:06:54.278 "nvme_iov_md": false 00:06:54.278 }, 00:06:54.278 "memory_domains": [ 00:06:54.278 { 00:06:54.278 "dma_device_id": "system", 00:06:54.278 "dma_device_type": 1 00:06:54.278 }, 00:06:54.278 { 00:06:54.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.278 "dma_device_type": 2 00:06:54.278 } 00:06:54.278 ], 00:06:54.278 "driver_specific": {} 00:06:54.278 } 00:06:54.278 ]' 00:06:54.278 08:28:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:54.278 08:28:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:54.278 08:28:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:54.278 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.278 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:54.278 [2024-11-19 08:28:15.980783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:54.278 [2024-11-19 08:28:15.980869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:54.278 [2024-11-19 08:28:15.980921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:54.278 [2024-11-19 08:28:15.980934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:54.279 [2024-11-19 08:28:15.983590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:54.279 [2024-11-19 08:28:15.983631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:54.279 Passthru0 00:06:54.279 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.279 
08:28:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:54.279 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.279 08:28:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:54.279 08:28:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.279 08:28:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:54.279 { 00:06:54.279 "name": "Malloc0", 00:06:54.279 "aliases": [ 00:06:54.279 "9bf984cf-7a77-44be-9b51-e049c33152ef" 00:06:54.279 ], 00:06:54.279 "product_name": "Malloc disk", 00:06:54.279 "block_size": 512, 00:06:54.279 "num_blocks": 16384, 00:06:54.279 "uuid": "9bf984cf-7a77-44be-9b51-e049c33152ef", 00:06:54.279 "assigned_rate_limits": { 00:06:54.279 "rw_ios_per_sec": 0, 00:06:54.279 "rw_mbytes_per_sec": 0, 00:06:54.279 "r_mbytes_per_sec": 0, 00:06:54.279 "w_mbytes_per_sec": 0 00:06:54.279 }, 00:06:54.279 "claimed": true, 00:06:54.279 "claim_type": "exclusive_write", 00:06:54.279 "zoned": false, 00:06:54.279 "supported_io_types": { 00:06:54.279 "read": true, 00:06:54.279 "write": true, 00:06:54.279 "unmap": true, 00:06:54.279 "flush": true, 00:06:54.279 "reset": true, 00:06:54.279 "nvme_admin": false, 00:06:54.279 "nvme_io": false, 00:06:54.279 "nvme_io_md": false, 00:06:54.279 "write_zeroes": true, 00:06:54.279 "zcopy": true, 00:06:54.279 "get_zone_info": false, 00:06:54.279 "zone_management": false, 00:06:54.279 "zone_append": false, 00:06:54.279 "compare": false, 00:06:54.279 "compare_and_write": false, 00:06:54.279 "abort": true, 00:06:54.279 "seek_hole": false, 00:06:54.279 "seek_data": false, 00:06:54.279 "copy": true, 00:06:54.279 "nvme_iov_md": false 00:06:54.279 }, 00:06:54.279 "memory_domains": [ 00:06:54.279 { 00:06:54.279 "dma_device_id": "system", 00:06:54.279 "dma_device_type": 1 00:06:54.279 }, 00:06:54.279 { 00:06:54.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.279 "dma_device_type": 2 00:06:54.279 } 00:06:54.279 ], 00:06:54.279 "driver_specific": {} 00:06:54.279 }, 00:06:54.279 { 00:06:54.279 "name": "Passthru0", 00:06:54.279 "aliases": [ 00:06:54.279 "e6362a7f-52ff-5aa0-9851-b945213bb312" 00:06:54.279 ], 00:06:54.279 "product_name": "passthru", 00:06:54.279 "block_size": 512, 00:06:54.279 "num_blocks": 16384, 00:06:54.279 "uuid": "e6362a7f-52ff-5aa0-9851-b945213bb312", 00:06:54.279 "assigned_rate_limits": { 00:06:54.279 "rw_ios_per_sec": 0, 00:06:54.279 "rw_mbytes_per_sec": 0, 00:06:54.279 "r_mbytes_per_sec": 0, 00:06:54.279 "w_mbytes_per_sec": 0 00:06:54.279 }, 00:06:54.279 "claimed": false, 00:06:54.279 "zoned": false, 00:06:54.279 "supported_io_types": { 00:06:54.279 "read": true, 00:06:54.279 "write": true, 00:06:54.279 "unmap": true, 00:06:54.279 "flush": true, 00:06:54.279 "reset": true, 00:06:54.279 "nvme_admin": false, 00:06:54.279 "nvme_io": false, 00:06:54.279 "nvme_io_md": false, 00:06:54.279 "write_zeroes": true, 00:06:54.279 "zcopy": true, 00:06:54.279 "get_zone_info": false, 00:06:54.279 "zone_management": false, 00:06:54.279 "zone_append": false, 00:06:54.279 "compare": false, 00:06:54.279 "compare_and_write": false, 00:06:54.279 "abort": true, 00:06:54.279 "seek_hole": false, 00:06:54.279 "seek_data": false, 00:06:54.279 "copy": true, 00:06:54.279 "nvme_iov_md": false 00:06:54.279 }, 00:06:54.279 "memory_domains": [ 00:06:54.279 { 00:06:54.279 "dma_device_id": "system", 00:06:54.279 "dma_device_type": 1 00:06:54.279 }, 00:06:54.279 { 00:06:54.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.279 "dma_device_type": 2 
00:06:54.279 } 00:06:54.279 ], 00:06:54.279 "driver_specific": { 00:06:54.279 "passthru": { 00:06:54.279 "name": "Passthru0", 00:06:54.279 "base_bdev_name": "Malloc0" 00:06:54.279 } 00:06:54.279 } 00:06:54.279 } 00:06:54.279 ]' 00:06:54.279 08:28:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:54.279 08:28:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:54.279 08:28:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:54.279 08:28:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.279 08:28:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:54.279 08:28:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.279 08:28:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:54.279 08:28:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.279 08:28:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:54.279 08:28:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.279 08:28:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:54.279 08:28:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.279 08:28:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:54.279 08:28:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.279 08:28:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:54.279 08:28:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:54.279 08:28:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:54.279 00:06:54.279 real 0m0.334s 00:06:54.279 user 0m0.200s 00:06:54.279 sys 0m0.059s 00:06:54.279 08:28:16 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.279 08:28:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:54.279 ************************************ 00:06:54.279 END TEST rpc_integrity 00:06:54.279 ************************************ 00:06:54.537 08:28:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:54.537 08:28:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.537 08:28:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.537 08:28:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.537 ************************************ 00:06:54.537 START TEST rpc_plugins 00:06:54.537 ************************************ 00:06:54.537 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:54.537 08:28:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:54.537 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.537 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:54.537 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.537 08:28:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:54.537 08:28:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:54.538 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.538 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:54.538 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.538 08:28:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:54.538 { 00:06:54.538 "name": "Malloc1", 00:06:54.538 "aliases": 
[ 00:06:54.538 "b1a3f202-f6cf-4c3d-bc49-a1f77ce58f5a" 00:06:54.538 ], 00:06:54.538 "product_name": "Malloc disk", 00:06:54.538 "block_size": 4096, 00:06:54.538 "num_blocks": 256, 00:06:54.538 "uuid": "b1a3f202-f6cf-4c3d-bc49-a1f77ce58f5a", 00:06:54.538 "assigned_rate_limits": { 00:06:54.538 "rw_ios_per_sec": 0, 00:06:54.538 "rw_mbytes_per_sec": 0, 00:06:54.538 "r_mbytes_per_sec": 0, 00:06:54.538 "w_mbytes_per_sec": 0 00:06:54.538 }, 00:06:54.538 "claimed": false, 00:06:54.538 "zoned": false, 00:06:54.538 "supported_io_types": { 00:06:54.538 "read": true, 00:06:54.538 "write": true, 00:06:54.538 "unmap": true, 00:06:54.538 "flush": true, 00:06:54.538 "reset": true, 00:06:54.538 "nvme_admin": false, 00:06:54.538 "nvme_io": false, 00:06:54.538 "nvme_io_md": false, 00:06:54.538 "write_zeroes": true, 00:06:54.538 "zcopy": true, 00:06:54.538 "get_zone_info": false, 00:06:54.538 "zone_management": false, 00:06:54.538 "zone_append": false, 00:06:54.538 "compare": false, 00:06:54.538 "compare_and_write": false, 00:06:54.538 "abort": true, 00:06:54.538 "seek_hole": false, 00:06:54.538 "seek_data": false, 00:06:54.538 "copy": true, 00:06:54.538 "nvme_iov_md": false 00:06:54.538 }, 00:06:54.538 "memory_domains": [ 00:06:54.538 { 00:06:54.538 "dma_device_id": "system", 00:06:54.538 "dma_device_type": 1 00:06:54.538 }, 00:06:54.538 { 00:06:54.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.538 "dma_device_type": 2 00:06:54.538 } 00:06:54.538 ], 00:06:54.538 "driver_specific": {} 00:06:54.538 } 00:06:54.538 ]' 00:06:54.538 08:28:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:54.538 08:28:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:54.538 08:28:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:54.538 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.538 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:54.538 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.538 08:28:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:54.538 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.538 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:54.538 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.538 08:28:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:54.538 08:28:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:54.538 08:28:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:54.538 00:06:54.538 real 0m0.145s 00:06:54.538 user 0m0.084s 00:06:54.538 sys 0m0.023s 00:06:54.538 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.538 08:28:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:54.538 ************************************ 00:06:54.538 END TEST rpc_plugins 00:06:54.538 ************************************ 00:06:54.538 08:28:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:54.538 08:28:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.538 08:28:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.538 08:28:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.538 ************************************ 00:06:54.538 START TEST rpc_trace_cmd_test 00:06:54.538 ************************************ 00:06:54.538 08:28:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:06:54.538 08:28:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:54.538 08:28:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:54.538 08:28:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.538 08:28:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.798 08:28:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.798 08:28:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:54.798 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70176", 00:06:54.798 "tpoint_group_mask": "0x8", 00:06:54.798 "iscsi_conn": { 00:06:54.798 "mask": "0x2", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "scsi": { 00:06:54.798 "mask": "0x4", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "bdev": { 00:06:54.798 "mask": "0x8", 00:06:54.798 "tpoint_mask": "0xffffffffffffffff" 00:06:54.798 }, 00:06:54.798 "nvmf_rdma": { 00:06:54.798 "mask": "0x10", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "nvmf_tcp": { 00:06:54.798 "mask": "0x20", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "ftl": { 00:06:54.798 "mask": "0x40", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "blobfs": { 00:06:54.798 "mask": "0x80", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "dsa": { 00:06:54.798 "mask": "0x200", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "thread": { 00:06:54.798 "mask": "0x400", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "nvme_pcie": { 00:06:54.798 "mask": "0x800", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "iaa": { 00:06:54.798 "mask": "0x1000", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "nvme_tcp": { 00:06:54.798 "mask": "0x2000", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "bdev_nvme": { 00:06:54.798 "mask": "0x4000", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "sock": { 00:06:54.798 "mask": "0x8000", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "blob": { 00:06:54.798 "mask": "0x10000", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "bdev_raid": { 00:06:54.798 "mask": "0x20000", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 }, 00:06:54.798 "scheduler": { 00:06:54.798 "mask": "0x40000", 00:06:54.798 "tpoint_mask": "0x0" 00:06:54.798 } 00:06:54.798 }' 00:06:54.798 08:28:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:54.798 08:28:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:54.798 08:28:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:54.798 08:28:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:54.798 08:28:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:54.798 08:28:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:54.798 08:28:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:54.798 08:28:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:54.798 08:28:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:54.799 08:28:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:54.799 00:06:54.799 real 0m0.235s 00:06:54.799 user 0m0.181s 00:06:54.799 sys 0m0.043s 00:06:54.799 08:28:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:54.799 08:28:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.799 ************************************ 00:06:54.799 END TEST rpc_trace_cmd_test 00:06:54.799 ************************************ 00:06:55.058 08:28:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:55.058 08:28:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:55.058 08:28:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:55.059 08:28:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.059 08:28:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.059 08:28:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.059 ************************************ 00:06:55.059 START TEST rpc_daemon_integrity 00:06:55.059 ************************************ 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:55.059 { 00:06:55.059 "name": "Malloc2", 00:06:55.059 "aliases": [ 00:06:55.059 "a47c6a9d-8aa2-496e-9009-63b8c8623fbf" 00:06:55.059 ], 00:06:55.059 "product_name": "Malloc disk", 00:06:55.059 "block_size": 512, 00:06:55.059 "num_blocks": 16384, 00:06:55.059 "uuid": "a47c6a9d-8aa2-496e-9009-63b8c8623fbf", 00:06:55.059 "assigned_rate_limits": { 00:06:55.059 "rw_ios_per_sec": 0, 00:06:55.059 "rw_mbytes_per_sec": 0, 00:06:55.059 "r_mbytes_per_sec": 0, 00:06:55.059 "w_mbytes_per_sec": 0 00:06:55.059 }, 00:06:55.059 "claimed": false, 00:06:55.059 "zoned": false, 00:06:55.059 "supported_io_types": { 00:06:55.059 "read": true, 00:06:55.059 "write": true, 00:06:55.059 "unmap": true, 00:06:55.059 "flush": true, 00:06:55.059 "reset": true, 00:06:55.059 "nvme_admin": false, 00:06:55.059 "nvme_io": false, 00:06:55.059 "nvme_io_md": false, 00:06:55.059 "write_zeroes": true, 00:06:55.059 "zcopy": true, 00:06:55.059 "get_zone_info": false, 00:06:55.059 "zone_management": false, 00:06:55.059 "zone_append": false, 00:06:55.059 "compare": false, 00:06:55.059 
"compare_and_write": false, 00:06:55.059 "abort": true, 00:06:55.059 "seek_hole": false, 00:06:55.059 "seek_data": false, 00:06:55.059 "copy": true, 00:06:55.059 "nvme_iov_md": false 00:06:55.059 }, 00:06:55.059 "memory_domains": [ 00:06:55.059 { 00:06:55.059 "dma_device_id": "system", 00:06:55.059 "dma_device_type": 1 00:06:55.059 }, 00:06:55.059 { 00:06:55.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.059 "dma_device_type": 2 00:06:55.059 } 00:06:55.059 ], 00:06:55.059 "driver_specific": {} 00:06:55.059 } 00:06:55.059 ]' 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.059 [2024-11-19 08:28:16.876315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:55.059 [2024-11-19 08:28:16.876396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.059 [2024-11-19 08:28:16.876428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:55.059 [2024-11-19 08:28:16.876441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.059 [2024-11-19 08:28:16.879013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.059 [2024-11-19 08:28:16.879051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:55.059 Passthru0 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:55.059 { 00:06:55.059 "name": "Malloc2", 00:06:55.059 "aliases": [ 00:06:55.059 "a47c6a9d-8aa2-496e-9009-63b8c8623fbf" 00:06:55.059 ], 00:06:55.059 "product_name": "Malloc disk", 00:06:55.059 "block_size": 512, 00:06:55.059 "num_blocks": 16384, 00:06:55.059 "uuid": "a47c6a9d-8aa2-496e-9009-63b8c8623fbf", 00:06:55.059 "assigned_rate_limits": { 00:06:55.059 "rw_ios_per_sec": 0, 00:06:55.059 "rw_mbytes_per_sec": 0, 00:06:55.059 "r_mbytes_per_sec": 0, 00:06:55.059 "w_mbytes_per_sec": 0 00:06:55.059 }, 00:06:55.059 "claimed": true, 00:06:55.059 "claim_type": "exclusive_write", 00:06:55.059 "zoned": false, 00:06:55.059 "supported_io_types": { 00:06:55.059 "read": true, 00:06:55.059 "write": true, 00:06:55.059 "unmap": true, 00:06:55.059 "flush": true, 00:06:55.059 "reset": true, 00:06:55.059 "nvme_admin": false, 00:06:55.059 "nvme_io": false, 00:06:55.059 "nvme_io_md": false, 00:06:55.059 "write_zeroes": true, 00:06:55.059 "zcopy": true, 00:06:55.059 "get_zone_info": false, 00:06:55.059 "zone_management": false, 00:06:55.059 "zone_append": false, 00:06:55.059 "compare": false, 00:06:55.059 "compare_and_write": false, 00:06:55.059 "abort": true, 00:06:55.059 "seek_hole": false, 00:06:55.059 "seek_data": false, 
00:06:55.059 "copy": true, 00:06:55.059 "nvme_iov_md": false 00:06:55.059 }, 00:06:55.059 "memory_domains": [ 00:06:55.059 { 00:06:55.059 "dma_device_id": "system", 00:06:55.059 "dma_device_type": 1 00:06:55.059 }, 00:06:55.059 { 00:06:55.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.059 "dma_device_type": 2 00:06:55.059 } 00:06:55.059 ], 00:06:55.059 "driver_specific": {} 00:06:55.059 }, 00:06:55.059 { 00:06:55.059 "name": "Passthru0", 00:06:55.059 "aliases": [ 00:06:55.059 "b1d6bc02-bb91-5613-ac16-0691e421d319" 00:06:55.059 ], 00:06:55.059 "product_name": "passthru", 00:06:55.059 "block_size": 512, 00:06:55.059 "num_blocks": 16384, 00:06:55.059 "uuid": "b1d6bc02-bb91-5613-ac16-0691e421d319", 00:06:55.059 "assigned_rate_limits": { 00:06:55.059 "rw_ios_per_sec": 0, 00:06:55.059 "rw_mbytes_per_sec": 0, 00:06:55.059 "r_mbytes_per_sec": 0, 00:06:55.059 "w_mbytes_per_sec": 0 00:06:55.059 }, 00:06:55.059 "claimed": false, 00:06:55.059 "zoned": false, 00:06:55.059 "supported_io_types": { 00:06:55.059 "read": true, 00:06:55.059 "write": true, 00:06:55.059 "unmap": true, 00:06:55.059 "flush": true, 00:06:55.059 "reset": true, 00:06:55.059 "nvme_admin": false, 00:06:55.059 "nvme_io": false, 00:06:55.059 "nvme_io_md": false, 00:06:55.059 "write_zeroes": true, 00:06:55.059 "zcopy": true, 00:06:55.059 "get_zone_info": false, 00:06:55.059 "zone_management": false, 00:06:55.059 "zone_append": false, 00:06:55.059 "compare": false, 00:06:55.059 "compare_and_write": false, 00:06:55.059 "abort": true, 00:06:55.059 "seek_hole": false, 00:06:55.059 "seek_data": false, 00:06:55.059 "copy": true, 00:06:55.059 "nvme_iov_md": false 00:06:55.059 }, 00:06:55.059 "memory_domains": [ 00:06:55.059 { 00:06:55.059 "dma_device_id": "system", 00:06:55.059 "dma_device_type": 1 00:06:55.059 }, 00:06:55.059 { 00:06:55.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.059 "dma_device_type": 2 00:06:55.059 } 00:06:55.059 ], 00:06:55.059 "driver_specific": { 00:06:55.059 "passthru": { 00:06:55.059 "name": "Passthru0", 00:06:55.059 "base_bdev_name": "Malloc2" 00:06:55.059 } 00:06:55.059 } 00:06:55.059 } 00:06:55.059 ]' 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.059 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.318 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.318 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:55.318 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.318 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.318 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.318 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:55.318 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.318 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.318 08:28:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.318 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:06:55.318 08:28:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:55.318 08:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:55.318 00:06:55.318 real 0m0.316s 00:06:55.318 user 0m0.191s 00:06:55.318 sys 0m0.054s 00:06:55.318 08:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.318 08:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:55.318 ************************************ 00:06:55.318 END TEST rpc_daemon_integrity 00:06:55.318 ************************************ 00:06:55.318 08:28:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:55.318 08:28:17 rpc -- rpc/rpc.sh@84 -- # killprocess 70176 00:06:55.318 08:28:17 rpc -- common/autotest_common.sh@954 -- # '[' -z 70176 ']' 00:06:55.318 08:28:17 rpc -- common/autotest_common.sh@958 -- # kill -0 70176 00:06:55.318 08:28:17 rpc -- common/autotest_common.sh@959 -- # uname 00:06:55.318 08:28:17 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.318 08:28:17 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70176 00:06:55.318 08:28:17 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.318 08:28:17 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.318 killing process with pid 70176 00:06:55.318 08:28:17 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70176' 00:06:55.318 08:28:17 rpc -- common/autotest_common.sh@973 -- # kill 70176 00:06:55.318 08:28:17 rpc -- common/autotest_common.sh@978 -- # wait 70176 00:06:55.885 00:06:55.885 real 0m2.848s 00:06:55.885 user 0m3.444s 00:06:55.885 sys 0m0.856s 00:06:55.885 08:28:17 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.885 08:28:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.885 ************************************ 00:06:55.885 END TEST rpc 00:06:55.885 ************************************ 00:06:55.885 08:28:17 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:55.885 08:28:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.885 08:28:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.885 08:28:17 -- common/autotest_common.sh@10 -- # set +x 00:06:55.885 ************************************ 00:06:55.885 START TEST skip_rpc 00:06:55.885 ************************************ 00:06:55.885 08:28:17 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:55.885 * Looking for test storage... 
00:06:55.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:55.885 08:28:17 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.885 08:28:17 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.885 08:28:17 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.885 08:28:17 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:55.885 08:28:17 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.144 08:28:17 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:56.144 08:28:17 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.144 08:28:17 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.144 08:28:17 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.144 08:28:17 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:56.144 08:28:17 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.144 08:28:17 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.144 --rc genhtml_branch_coverage=1 00:06:56.144 --rc genhtml_function_coverage=1 00:06:56.144 --rc genhtml_legend=1 00:06:56.144 --rc geninfo_all_blocks=1 00:06:56.144 --rc geninfo_unexecuted_blocks=1 00:06:56.144 00:06:56.144 ' 00:06:56.144 08:28:17 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.144 --rc genhtml_branch_coverage=1 00:06:56.144 --rc genhtml_function_coverage=1 00:06:56.144 --rc genhtml_legend=1 00:06:56.144 --rc geninfo_all_blocks=1 00:06:56.144 --rc geninfo_unexecuted_blocks=1 00:06:56.144 00:06:56.144 ' 00:06:56.144 08:28:17 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:56.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.144 --rc genhtml_branch_coverage=1 00:06:56.144 --rc genhtml_function_coverage=1 00:06:56.144 --rc genhtml_legend=1 00:06:56.144 --rc geninfo_all_blocks=1 00:06:56.144 --rc geninfo_unexecuted_blocks=1 00:06:56.144 00:06:56.144 ' 00:06:56.144 08:28:17 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.144 --rc genhtml_branch_coverage=1 00:06:56.144 --rc genhtml_function_coverage=1 00:06:56.144 --rc genhtml_legend=1 00:06:56.144 --rc geninfo_all_blocks=1 00:06:56.144 --rc geninfo_unexecuted_blocks=1 00:06:56.144 00:06:56.144 ' 00:06:56.144 08:28:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:56.144 08:28:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:56.144 08:28:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:56.144 08:28:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.144 08:28:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.144 08:28:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.144 ************************************ 00:06:56.144 START TEST skip_rpc 00:06:56.144 ************************************ 00:06:56.144 08:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:56.144 08:28:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70383 00:06:56.144 08:28:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:56.144 08:28:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.144 08:28:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:56.144 [2024-11-19 08:28:17.911315] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:56.144 [2024-11-19 08:28:17.911448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70383 ] 00:06:56.403 [2024-11-19 08:28:18.068761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.403 [2024-11-19 08:28:18.097504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.672 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.673 08:28:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:01.673 08:28:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70383 00:07:01.673 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 70383 ']' 00:07:01.673 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 70383 00:07:01.673 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:01.673 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.673 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70383 00:07:01.673 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.673 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.673 killing process with pid 70383 00:07:01.673 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70383' 00:07:01.673 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 70383 00:07:01.673 08:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 70383 00:07:01.673 00:07:01.673 real 0m5.428s 00:07:01.673 user 0m5.038s 00:07:01.673 sys 0m0.318s 00:07:01.673 08:28:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.673 08:28:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.673 ************************************ 00:07:01.673 END TEST skip_rpc 00:07:01.673 
************************************ 00:07:01.673 08:28:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:01.673 08:28:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.673 08:28:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.673 08:28:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.673 ************************************ 00:07:01.673 START TEST skip_rpc_with_json 00:07:01.673 ************************************ 00:07:01.673 08:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:01.673 08:28:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:01.673 08:28:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70465 00:07:01.673 08:28:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:01.673 08:28:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70465 00:07:01.673 08:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 70465 ']' 00:07:01.673 08:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.673 08:28:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.673 08:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.673 08:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.673 08:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.673 08:28:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:01.673 [2024-11-19 08:28:23.389207] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:01.673 [2024-11-19 08:28:23.389347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70465 ] 00:07:01.673 [2024-11-19 08:28:23.541895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.673 [2024-11-19 08:28:23.571386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:02.609 [2024-11-19 08:28:24.222157] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:02.609 request: 00:07:02.609 { 00:07:02.609 "trtype": "tcp", 00:07:02.609 "method": "nvmf_get_transports", 00:07:02.609 "req_id": 1 00:07:02.609 } 00:07:02.609 Got JSON-RPC error response 00:07:02.609 response: 00:07:02.609 { 00:07:02.609 "code": -19, 00:07:02.609 "message": "No such device" 00:07:02.609 } 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:02.609 [2024-11-19 08:28:24.234298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.609 08:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:02.609 { 00:07:02.609 "subsystems": [ 00:07:02.609 { 00:07:02.609 "subsystem": "fsdev", 00:07:02.609 "config": [ 00:07:02.609 { 00:07:02.609 "method": "fsdev_set_opts", 00:07:02.609 "params": { 00:07:02.609 "fsdev_io_pool_size": 65535, 00:07:02.609 "fsdev_io_cache_size": 256 00:07:02.609 } 00:07:02.609 } 00:07:02.609 ] 00:07:02.609 }, 00:07:02.609 { 00:07:02.609 "subsystem": "keyring", 00:07:02.609 "config": [] 00:07:02.609 }, 00:07:02.609 { 00:07:02.609 "subsystem": "iobuf", 00:07:02.609 "config": [ 00:07:02.609 { 00:07:02.609 "method": "iobuf_set_options", 00:07:02.609 "params": { 00:07:02.609 "small_pool_count": 8192, 00:07:02.609 "large_pool_count": 1024, 00:07:02.609 "small_bufsize": 8192, 00:07:02.609 "large_bufsize": 135168, 00:07:02.609 "enable_numa": false 00:07:02.609 } 00:07:02.609 } 00:07:02.609 ] 00:07:02.609 }, 00:07:02.609 { 00:07:02.609 "subsystem": "sock", 00:07:02.609 "config": [ 00:07:02.609 { 
00:07:02.609 "method": "sock_set_default_impl", 00:07:02.609 "params": { 00:07:02.609 "impl_name": "posix" 00:07:02.609 } 00:07:02.609 }, 00:07:02.609 { 00:07:02.609 "method": "sock_impl_set_options", 00:07:02.609 "params": { 00:07:02.609 "impl_name": "ssl", 00:07:02.609 "recv_buf_size": 4096, 00:07:02.609 "send_buf_size": 4096, 00:07:02.609 "enable_recv_pipe": true, 00:07:02.609 "enable_quickack": false, 00:07:02.609 "enable_placement_id": 0, 00:07:02.609 "enable_zerocopy_send_server": true, 00:07:02.609 "enable_zerocopy_send_client": false, 00:07:02.609 "zerocopy_threshold": 0, 00:07:02.609 "tls_version": 0, 00:07:02.609 "enable_ktls": false 00:07:02.609 } 00:07:02.609 }, 00:07:02.609 { 00:07:02.609 "method": "sock_impl_set_options", 00:07:02.609 "params": { 00:07:02.609 "impl_name": "posix", 00:07:02.609 "recv_buf_size": 2097152, 00:07:02.609 "send_buf_size": 2097152, 00:07:02.609 "enable_recv_pipe": true, 00:07:02.609 "enable_quickack": false, 00:07:02.609 "enable_placement_id": 0, 00:07:02.609 "enable_zerocopy_send_server": true, 00:07:02.609 "enable_zerocopy_send_client": false, 00:07:02.609 "zerocopy_threshold": 0, 00:07:02.609 "tls_version": 0, 00:07:02.609 "enable_ktls": false 00:07:02.609 } 00:07:02.609 } 00:07:02.609 ] 00:07:02.609 }, 00:07:02.609 { 00:07:02.609 "subsystem": "vmd", 00:07:02.609 "config": [] 00:07:02.609 }, 00:07:02.609 { 00:07:02.609 "subsystem": "accel", 00:07:02.609 "config": [ 00:07:02.609 { 00:07:02.609 "method": "accel_set_options", 00:07:02.609 "params": { 00:07:02.609 "small_cache_size": 128, 00:07:02.609 "large_cache_size": 16, 00:07:02.609 "task_count": 2048, 00:07:02.609 "sequence_count": 2048, 00:07:02.609 "buf_count": 2048 00:07:02.609 } 00:07:02.609 } 00:07:02.609 ] 00:07:02.609 }, 00:07:02.609 { 00:07:02.609 "subsystem": "bdev", 00:07:02.609 "config": [ 00:07:02.609 { 00:07:02.609 "method": "bdev_set_options", 00:07:02.609 "params": { 00:07:02.609 "bdev_io_pool_size": 65535, 00:07:02.609 "bdev_io_cache_size": 256, 00:07:02.609 "bdev_auto_examine": true, 00:07:02.609 "iobuf_small_cache_size": 128, 00:07:02.609 "iobuf_large_cache_size": 16 00:07:02.609 } 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "method": "bdev_raid_set_options", 00:07:02.610 "params": { 00:07:02.610 "process_window_size_kb": 1024, 00:07:02.610 "process_max_bandwidth_mb_sec": 0 00:07:02.610 } 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "method": "bdev_iscsi_set_options", 00:07:02.610 "params": { 00:07:02.610 "timeout_sec": 30 00:07:02.610 } 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "method": "bdev_nvme_set_options", 00:07:02.610 "params": { 00:07:02.610 "action_on_timeout": "none", 00:07:02.610 "timeout_us": 0, 00:07:02.610 "timeout_admin_us": 0, 00:07:02.610 "keep_alive_timeout_ms": 10000, 00:07:02.610 "arbitration_burst": 0, 00:07:02.610 "low_priority_weight": 0, 00:07:02.610 "medium_priority_weight": 0, 00:07:02.610 "high_priority_weight": 0, 00:07:02.610 "nvme_adminq_poll_period_us": 10000, 00:07:02.610 "nvme_ioq_poll_period_us": 0, 00:07:02.610 "io_queue_requests": 0, 00:07:02.610 "delay_cmd_submit": true, 00:07:02.610 "transport_retry_count": 4, 00:07:02.610 "bdev_retry_count": 3, 00:07:02.610 "transport_ack_timeout": 0, 00:07:02.610 "ctrlr_loss_timeout_sec": 0, 00:07:02.610 "reconnect_delay_sec": 0, 00:07:02.610 "fast_io_fail_timeout_sec": 0, 00:07:02.610 "disable_auto_failback": false, 00:07:02.610 "generate_uuids": false, 00:07:02.610 "transport_tos": 0, 00:07:02.610 "nvme_error_stat": false, 00:07:02.610 "rdma_srq_size": 0, 00:07:02.610 "io_path_stat": false, 
00:07:02.610 "allow_accel_sequence": false, 00:07:02.610 "rdma_max_cq_size": 0, 00:07:02.610 "rdma_cm_event_timeout_ms": 0, 00:07:02.610 "dhchap_digests": [ 00:07:02.610 "sha256", 00:07:02.610 "sha384", 00:07:02.610 "sha512" 00:07:02.610 ], 00:07:02.610 "dhchap_dhgroups": [ 00:07:02.610 "null", 00:07:02.610 "ffdhe2048", 00:07:02.610 "ffdhe3072", 00:07:02.610 "ffdhe4096", 00:07:02.610 "ffdhe6144", 00:07:02.610 "ffdhe8192" 00:07:02.610 ] 00:07:02.610 } 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "method": "bdev_nvme_set_hotplug", 00:07:02.610 "params": { 00:07:02.610 "period_us": 100000, 00:07:02.610 "enable": false 00:07:02.610 } 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "method": "bdev_wait_for_examine" 00:07:02.610 } 00:07:02.610 ] 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "subsystem": "scsi", 00:07:02.610 "config": null 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "subsystem": "scheduler", 00:07:02.610 "config": [ 00:07:02.610 { 00:07:02.610 "method": "framework_set_scheduler", 00:07:02.610 "params": { 00:07:02.610 "name": "static" 00:07:02.610 } 00:07:02.610 } 00:07:02.610 ] 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "subsystem": "vhost_scsi", 00:07:02.610 "config": [] 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "subsystem": "vhost_blk", 00:07:02.610 "config": [] 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "subsystem": "ublk", 00:07:02.610 "config": [] 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "subsystem": "nbd", 00:07:02.610 "config": [] 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "subsystem": "nvmf", 00:07:02.610 "config": [ 00:07:02.610 { 00:07:02.610 "method": "nvmf_set_config", 00:07:02.610 "params": { 00:07:02.610 "discovery_filter": "match_any", 00:07:02.610 "admin_cmd_passthru": { 00:07:02.610 "identify_ctrlr": false 00:07:02.610 }, 00:07:02.610 "dhchap_digests": [ 00:07:02.610 "sha256", 00:07:02.610 "sha384", 00:07:02.610 "sha512" 00:07:02.610 ], 00:07:02.610 "dhchap_dhgroups": [ 00:07:02.610 "null", 00:07:02.610 "ffdhe2048", 00:07:02.610 "ffdhe3072", 00:07:02.610 "ffdhe4096", 00:07:02.610 "ffdhe6144", 00:07:02.610 "ffdhe8192" 00:07:02.610 ] 00:07:02.610 } 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "method": "nvmf_set_max_subsystems", 00:07:02.610 "params": { 00:07:02.610 "max_subsystems": 1024 00:07:02.610 } 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "method": "nvmf_set_crdt", 00:07:02.610 "params": { 00:07:02.610 "crdt1": 0, 00:07:02.610 "crdt2": 0, 00:07:02.610 "crdt3": 0 00:07:02.610 } 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "method": "nvmf_create_transport", 00:07:02.610 "params": { 00:07:02.610 "trtype": "TCP", 00:07:02.610 "max_queue_depth": 128, 00:07:02.610 "max_io_qpairs_per_ctrlr": 127, 00:07:02.610 "in_capsule_data_size": 4096, 00:07:02.610 "max_io_size": 131072, 00:07:02.610 "io_unit_size": 131072, 00:07:02.610 "max_aq_depth": 128, 00:07:02.610 "num_shared_buffers": 511, 00:07:02.610 "buf_cache_size": 4294967295, 00:07:02.610 "dif_insert_or_strip": false, 00:07:02.610 "zcopy": false, 00:07:02.610 "c2h_success": true, 00:07:02.610 "sock_priority": 0, 00:07:02.610 "abort_timeout_sec": 1, 00:07:02.610 "ack_timeout": 0, 00:07:02.610 "data_wr_pool_size": 0 00:07:02.610 } 00:07:02.610 } 00:07:02.610 ] 00:07:02.610 }, 00:07:02.610 { 00:07:02.610 "subsystem": "iscsi", 00:07:02.610 "config": [ 00:07:02.610 { 00:07:02.610 "method": "iscsi_set_options", 00:07:02.610 "params": { 00:07:02.610 "node_base": "iqn.2016-06.io.spdk", 00:07:02.610 "max_sessions": 128, 00:07:02.610 "max_connections_per_session": 2, 00:07:02.610 "max_queue_depth": 64, 00:07:02.610 
"default_time2wait": 2, 00:07:02.610 "default_time2retain": 20, 00:07:02.610 "first_burst_length": 8192, 00:07:02.610 "immediate_data": true, 00:07:02.610 "allow_duplicated_isid": false, 00:07:02.610 "error_recovery_level": 0, 00:07:02.610 "nop_timeout": 60, 00:07:02.610 "nop_in_interval": 30, 00:07:02.610 "disable_chap": false, 00:07:02.610 "require_chap": false, 00:07:02.610 "mutual_chap": false, 00:07:02.610 "chap_group": 0, 00:07:02.610 "max_large_datain_per_connection": 64, 00:07:02.610 "max_r2t_per_connection": 4, 00:07:02.610 "pdu_pool_size": 36864, 00:07:02.610 "immediate_data_pool_size": 16384, 00:07:02.610 "data_out_pool_size": 2048 00:07:02.610 } 00:07:02.610 } 00:07:02.610 ] 00:07:02.610 } 00:07:02.610 ] 00:07:02.610 } 00:07:02.610 08:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:02.610 08:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70465 00:07:02.610 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 70465 ']' 00:07:02.610 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 70465 00:07:02.610 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:02.610 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.610 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70465 00:07:02.610 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.610 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.610 killing process with pid 70465 00:07:02.610 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70465' 00:07:02.610 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 70465 00:07:02.610 08:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 70465 00:07:03.181 08:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70498 00:07:03.181 08:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:03.181 08:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:08.446 08:28:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70498 00:07:08.446 08:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 70498 ']' 00:07:08.446 08:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 70498 00:07:08.446 08:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:08.446 08:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.446 08:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70498 00:07:08.446 08:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.446 08:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.446 killing process with pid 70498 00:07:08.446 08:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70498' 00:07:08.446 08:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 70498 00:07:08.446 08:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 70498 00:07:08.446 08:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:08.447 00:07:08.447 real 0m6.957s 00:07:08.447 user 0m6.559s 00:07:08.447 sys 0m0.697s 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:08.447 ************************************ 00:07:08.447 END TEST skip_rpc_with_json 00:07:08.447 ************************************ 00:07:08.447 08:28:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:08.447 08:28:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.447 08:28:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.447 08:28:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.447 ************************************ 00:07:08.447 START TEST skip_rpc_with_delay 00:07:08.447 ************************************ 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:08.447 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:08.705 [2024-11-19 08:28:30.416845] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
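(The skip_rpc_with_delay case above only needs to confirm that spdk_tgt rejects a contradictory flag pair; a minimal reproduction, assuming the build path shown in the log:)

# --wait-for-rpc asks the target to pause until an RPC arrives, which cannot happen with the RPC server disabled,
# so the binary is expected to exit non-zero with the error message captured above
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
echo $?        # non-zero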
00:07:08.705 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:08.705 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.705 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.705 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.705 00:07:08.705 real 0m0.170s 00:07:08.705 user 0m0.093s 00:07:08.705 sys 0m0.075s 00:07:08.705 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.705 08:28:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:08.705 ************************************ 00:07:08.705 END TEST skip_rpc_with_delay 00:07:08.705 ************************************ 00:07:08.705 08:28:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:08.705 08:28:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:08.705 08:28:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:08.705 08:28:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.705 08:28:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.705 08:28:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.705 ************************************ 00:07:08.705 START TEST exit_on_failed_rpc_init 00:07:08.705 ************************************ 00:07:08.705 08:28:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:08.705 08:28:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70605 00:07:08.705 08:28:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.705 08:28:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70605 00:07:08.705 08:28:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 70605 ']' 00:07:08.705 08:28:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.705 08:28:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.705 08:28:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.705 08:28:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.705 08:28:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:08.962 [2024-11-19 08:28:30.653140] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:08.962 [2024-11-19 08:28:30.653311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70605 ] 00:07:08.962 [2024-11-19 08:28:30.810715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.962 [2024-11-19 08:28:30.840070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:09.897 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:09.897 [2024-11-19 08:28:31.648354] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:09.897 [2024-11-19 08:28:31.648509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70623 ] 00:07:09.897 [2024-11-19 08:28:31.794154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.157 [2024-11-19 08:28:31.824367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.157 [2024-11-19 08:28:31.824488] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
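(The error above is the expected collision of two targets on one RPC socket; a sketch of the same scenario, with the socket path taken from the log and the -r override included only as an illustrative assumption:)

# the first instance claims the default socket /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
# a second instance on a different core mask but the same default socket fails to start its RPC listener
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2        # 'socket path /var/tmp/spdk.sock in use. Specify another.'
# giving the second instance its own socket (-r) avoids the clash
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock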
00:07:10.157 [2024-11-19 08:28:31.824507] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:10.157 [2024-11-19 08:28:31.824519] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70605 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 70605 ']' 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 70605 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70605 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70605' 00:07:10.157 killing process with pid 70605 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 70605 00:07:10.157 08:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 70605 00:07:10.417 00:07:10.417 real 0m1.770s 00:07:10.417 user 0m1.964s 00:07:10.417 sys 0m0.479s 00:07:10.417 08:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.417 08:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:10.417 ************************************ 00:07:10.417 END TEST exit_on_failed_rpc_init 00:07:10.417 ************************************ 00:07:10.677 08:28:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:10.677 00:07:10.677 real 0m14.808s 00:07:10.677 user 0m13.855s 00:07:10.677 sys 0m1.872s 00:07:10.677 08:28:32 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.677 08:28:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.677 ************************************ 00:07:10.677 END TEST skip_rpc 00:07:10.677 ************************************ 00:07:10.677 08:28:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:10.677 08:28:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.677 08:28:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.677 08:28:32 -- common/autotest_common.sh@10 -- # set +x 00:07:10.677 
************************************ 00:07:10.677 START TEST rpc_client 00:07:10.677 ************************************ 00:07:10.677 08:28:32 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:10.677 * Looking for test storage... 00:07:10.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:10.677 08:28:32 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.677 08:28:32 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.677 08:28:32 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.937 08:28:32 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.937 08:28:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.937 08:28:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.937 08:28:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.937 08:28:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.937 08:28:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.937 08:28:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.937 08:28:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.937 08:28:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.937 08:28:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.937 08:28:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.937 08:28:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.937 08:28:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.938 08:28:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:10.938 08:28:32 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.938 08:28:32 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.938 --rc genhtml_branch_coverage=1 00:07:10.938 --rc genhtml_function_coverage=1 00:07:10.938 --rc genhtml_legend=1 00:07:10.938 --rc geninfo_all_blocks=1 00:07:10.938 --rc geninfo_unexecuted_blocks=1 00:07:10.938 00:07:10.938 ' 00:07:10.938 08:28:32 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.938 --rc genhtml_branch_coverage=1 00:07:10.938 --rc genhtml_function_coverage=1 00:07:10.938 --rc genhtml_legend=1 00:07:10.938 --rc geninfo_all_blocks=1 00:07:10.938 --rc geninfo_unexecuted_blocks=1 00:07:10.938 00:07:10.938 ' 00:07:10.938 08:28:32 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.938 --rc genhtml_branch_coverage=1 00:07:10.938 --rc genhtml_function_coverage=1 00:07:10.938 --rc genhtml_legend=1 00:07:10.938 --rc geninfo_all_blocks=1 00:07:10.938 --rc geninfo_unexecuted_blocks=1 00:07:10.938 00:07:10.938 ' 00:07:10.938 08:28:32 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.938 --rc genhtml_branch_coverage=1 00:07:10.938 --rc genhtml_function_coverage=1 00:07:10.938 --rc genhtml_legend=1 00:07:10.938 --rc geninfo_all_blocks=1 00:07:10.938 --rc geninfo_unexecuted_blocks=1 00:07:10.938 00:07:10.938 ' 00:07:10.938 08:28:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:10.938 OK 00:07:10.938 08:28:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:10.938 00:07:10.938 real 0m0.276s 00:07:10.938 user 0m0.149s 00:07:10.938 sys 0m0.146s 00:07:10.938 08:28:32 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.938 08:28:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:10.938 ************************************ 00:07:10.938 END TEST rpc_client 00:07:10.938 ************************************ 00:07:10.938 08:28:32 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:10.938 08:28:32 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.938 08:28:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.938 08:28:32 -- common/autotest_common.sh@10 -- # set +x 00:07:10.938 ************************************ 00:07:10.938 START TEST json_config 00:07:10.938 ************************************ 00:07:10.938 08:28:32 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:11.199 08:28:32 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.199 08:28:32 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.199 08:28:32 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.199 08:28:32 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.199 08:28:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.199 08:28:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.199 08:28:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.199 08:28:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.199 08:28:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.199 08:28:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.199 08:28:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.199 08:28:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.199 08:28:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.199 08:28:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.199 08:28:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.199 08:28:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:11.199 08:28:32 json_config -- scripts/common.sh@345 -- # : 1 00:07:11.199 08:28:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.199 08:28:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.199 08:28:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:11.199 08:28:32 json_config -- scripts/common.sh@353 -- # local d=1 00:07:11.199 08:28:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.199 08:28:32 json_config -- scripts/common.sh@355 -- # echo 1 00:07:11.199 08:28:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.199 08:28:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:11.199 08:28:32 json_config -- scripts/common.sh@353 -- # local d=2 00:07:11.199 08:28:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.199 08:28:32 json_config -- scripts/common.sh@355 -- # echo 2 00:07:11.199 08:28:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.199 08:28:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.199 08:28:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.199 08:28:32 json_config -- scripts/common.sh@368 -- # return 0 00:07:11.199 08:28:32 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.199 08:28:32 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.199 --rc genhtml_branch_coverage=1 00:07:11.199 --rc genhtml_function_coverage=1 00:07:11.199 --rc genhtml_legend=1 00:07:11.199 --rc geninfo_all_blocks=1 00:07:11.199 --rc geninfo_unexecuted_blocks=1 00:07:11.199 00:07:11.199 ' 00:07:11.199 08:28:32 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.199 --rc genhtml_branch_coverage=1 00:07:11.199 --rc genhtml_function_coverage=1 00:07:11.199 --rc genhtml_legend=1 00:07:11.199 --rc geninfo_all_blocks=1 00:07:11.199 --rc geninfo_unexecuted_blocks=1 00:07:11.199 00:07:11.199 ' 00:07:11.199 08:28:32 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.199 --rc genhtml_branch_coverage=1 00:07:11.199 --rc genhtml_function_coverage=1 00:07:11.199 --rc genhtml_legend=1 00:07:11.199 --rc geninfo_all_blocks=1 00:07:11.199 --rc geninfo_unexecuted_blocks=1 00:07:11.199 00:07:11.199 ' 00:07:11.199 08:28:32 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.199 --rc genhtml_branch_coverage=1 00:07:11.199 --rc genhtml_function_coverage=1 00:07:11.199 --rc genhtml_legend=1 00:07:11.199 --rc geninfo_all_blocks=1 00:07:11.199 --rc geninfo_unexecuted_blocks=1 00:07:11.199 00:07:11.199 ' 00:07:11.199 08:28:32 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.199 08:28:32 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bba91b1-3e47-4a92-b42d-fd1dd4e0f3d4 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7bba91b1-3e47-4a92-b42d-fd1dd4e0f3d4 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:11.199 08:28:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.199 08:28:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.199 08:28:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.199 08:28:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.199 08:28:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.199 08:28:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.199 08:28:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.199 08:28:32 json_config -- paths/export.sh@5 -- # export PATH 00:07:11.199 08:28:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@51 -- # : 0 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.199 08:28:32 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.199 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.199 08:28:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.199 08:28:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.199 08:28:33 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:11.199 08:28:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:11.199 08:28:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:11.199 08:28:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:11.199 08:28:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:11.199 WARNING: No tests are enabled so not running JSON configuration tests 00:07:11.199 08:28:33 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:11.199 08:28:33 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:11.199 00:07:11.199 real 0m0.226s 00:07:11.199 user 0m0.144s 00:07:11.199 sys 0m0.090s 00:07:11.199 08:28:33 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.199 08:28:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.199 ************************************ 00:07:11.199 END TEST json_config 00:07:11.199 ************************************ 00:07:11.199 08:28:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:11.199 08:28:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.199 08:28:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.199 08:28:33 -- common/autotest_common.sh@10 -- # set +x 00:07:11.199 ************************************ 00:07:11.199 START TEST json_config_extra_key 00:07:11.199 ************************************ 00:07:11.200 08:28:33 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:11.460 08:28:33 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.460 08:28:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.460 08:28:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.460 08:28:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.460 08:28:33 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:11.460 08:28:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.460 08:28:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.460 --rc genhtml_branch_coverage=1 00:07:11.460 --rc genhtml_function_coverage=1 00:07:11.460 --rc genhtml_legend=1 00:07:11.460 --rc geninfo_all_blocks=1 00:07:11.460 --rc geninfo_unexecuted_blocks=1 00:07:11.460 00:07:11.460 ' 00:07:11.460 08:28:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.460 --rc genhtml_branch_coverage=1 00:07:11.460 --rc genhtml_function_coverage=1 00:07:11.460 --rc genhtml_legend=1 00:07:11.460 --rc geninfo_all_blocks=1 00:07:11.460 --rc geninfo_unexecuted_blocks=1 00:07:11.460 00:07:11.460 ' 00:07:11.460 08:28:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.460 --rc genhtml_branch_coverage=1 00:07:11.460 --rc genhtml_function_coverage=1 00:07:11.460 --rc genhtml_legend=1 00:07:11.460 --rc geninfo_all_blocks=1 00:07:11.460 --rc geninfo_unexecuted_blocks=1 00:07:11.460 00:07:11.460 ' 00:07:11.460 08:28:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.460 --rc genhtml_branch_coverage=1 00:07:11.460 --rc 
genhtml_function_coverage=1 00:07:11.460 --rc genhtml_legend=1 00:07:11.460 --rc geninfo_all_blocks=1 00:07:11.460 --rc geninfo_unexecuted_blocks=1 00:07:11.460 00:07:11.460 ' 00:07:11.460 08:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bba91b1-3e47-4a92-b42d-fd1dd4e0f3d4 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7bba91b1-3e47-4a92-b42d-fd1dd4e0f3d4 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.460 08:28:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.460 08:28:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.460 08:28:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.460 08:28:33 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.460 08:28:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:11.460 08:28:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:11.460 08:28:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.461 08:28:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.461 08:28:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.461 08:28:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.461 08:28:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.461 08:28:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.461 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.461 08:28:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.461 08:28:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.461 08:28:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.461 08:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:11.461 08:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:11.461 08:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:11.461 08:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:11.461 08:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:11.461 08:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:11.461 08:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:11.461 08:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:11.461 08:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:11.461 08:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:11.461 INFO: launching applications... 00:07:11.461 08:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
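A minimal sketch of the launch-and-wait step the json_config_extra_key trace performs next (an illustration only, not the actual json_config/common.sh implementation; the retry budget and the socket existence test are assumptions):

    # hedged sketch: start spdk_tgt with the extra-key JSON config, then wait for its RPC socket
    app_socket=/var/tmp/spdk_tgt.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r "$app_socket" --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!
    for ((i = 0; i < 30; i++)); do
        [[ -S "$app_socket" ]] && break   # assumption: treat socket creation as "listening"
        sleep 0.5
    done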
00:07:11.461 08:28:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:11.461 08:28:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:11.461 08:28:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:11.461 08:28:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:11.461 08:28:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:11.461 08:28:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:11.461 08:28:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:11.461 08:28:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:11.461 08:28:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70811 00:07:11.461 08:28:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:11.461 Waiting for target to run... 00:07:11.461 08:28:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70811 /var/tmp/spdk_tgt.sock 00:07:11.461 08:28:33 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 70811 ']' 00:07:11.461 08:28:33 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:11.461 08:28:33 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:11.461 08:28:33 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:11.461 08:28:33 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:11.461 08:28:33 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.461 08:28:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:11.725 [2024-11-19 08:28:33.374949] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:11.725 [2024-11-19 08:28:33.375102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70811 ] 00:07:11.988 [2024-11-19 08:28:33.732314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.988 [2024-11-19 08:28:33.751298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.559 08:28:34 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.559 08:28:34 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:12.559 08:28:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:12.559 00:07:12.559 INFO: shutting down applications... 00:07:12.559 08:28:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
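The shutdown traced below (json_config_test_shutdown_app) follows the usual SIGINT-then-poll pattern; as a hedged sketch:

    # hedged sketch of the shutdown loop traced below
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # target is gone once kill -0 fails
        sleep 0.5
    done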
00:07:12.559 08:28:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:12.559 08:28:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:12.559 08:28:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:12.559 08:28:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70811 ]] 00:07:12.559 08:28:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70811 00:07:12.559 08:28:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:12.559 08:28:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:12.559 08:28:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70811 00:07:12.559 08:28:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:12.818 08:28:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:12.818 08:28:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:12.818 08:28:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70811 00:07:12.818 08:28:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:12.818 08:28:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:12.818 08:28:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:12.818 08:28:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:12.818 SPDK target shutdown done 00:07:12.818 Success 00:07:12.818 08:28:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:13.076 00:07:13.076 real 0m1.652s 00:07:13.076 user 0m1.405s 00:07:13.076 sys 0m0.455s 00:07:13.076 08:28:34 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.076 08:28:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:13.076 ************************************ 00:07:13.076 END TEST json_config_extra_key 00:07:13.076 ************************************ 00:07:13.076 08:28:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:13.076 08:28:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.076 08:28:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.076 08:28:34 -- common/autotest_common.sh@10 -- # set +x 00:07:13.076 ************************************ 00:07:13.076 START TEST alias_rpc 00:07:13.076 ************************************ 00:07:13.076 08:28:34 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:13.076 * Looking for test storage... 
00:07:13.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:13.076 08:28:34 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.076 08:28:34 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.076 08:28:34 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.335 08:28:34 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:13.335 08:28:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.335 08:28:35 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:13.335 08:28:35 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.335 08:28:35 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.335 08:28:35 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.335 08:28:35 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:13.335 08:28:35 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.335 08:28:35 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.335 --rc genhtml_branch_coverage=1 00:07:13.335 --rc genhtml_function_coverage=1 00:07:13.335 --rc genhtml_legend=1 00:07:13.335 --rc geninfo_all_blocks=1 00:07:13.335 --rc geninfo_unexecuted_blocks=1 00:07:13.335 00:07:13.335 ' 00:07:13.335 08:28:35 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.335 --rc genhtml_branch_coverage=1 00:07:13.335 --rc genhtml_function_coverage=1 00:07:13.335 --rc genhtml_legend=1 00:07:13.335 --rc geninfo_all_blocks=1 00:07:13.335 --rc geninfo_unexecuted_blocks=1 00:07:13.335 00:07:13.335 ' 00:07:13.335 08:28:35 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.335 --rc genhtml_branch_coverage=1 00:07:13.335 --rc genhtml_function_coverage=1 00:07:13.335 --rc genhtml_legend=1 00:07:13.335 --rc geninfo_all_blocks=1 00:07:13.335 --rc geninfo_unexecuted_blocks=1 00:07:13.335 00:07:13.335 ' 00:07:13.335 08:28:35 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.335 --rc genhtml_branch_coverage=1 00:07:13.335 --rc genhtml_function_coverage=1 00:07:13.335 --rc genhtml_legend=1 00:07:13.335 --rc geninfo_all_blocks=1 00:07:13.335 --rc geninfo_unexecuted_blocks=1 00:07:13.335 00:07:13.335 ' 00:07:13.335 08:28:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:13.335 08:28:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70879 00:07:13.336 08:28:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:13.336 08:28:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70879 00:07:13.336 08:28:35 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 70879 ']' 00:07:13.336 08:28:35 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.336 08:28:35 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.336 08:28:35 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.336 08:28:35 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.336 08:28:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.336 [2024-11-19 08:28:35.106990] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:13.336 [2024-11-19 08:28:35.107151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70879 ] 00:07:13.595 [2024-11-19 08:28:35.265341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.595 [2024-11-19 08:28:35.293825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.163 08:28:35 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.163 08:28:35 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:14.163 08:28:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:14.422 08:28:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70879 00:07:14.422 08:28:36 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 70879 ']' 00:07:14.422 08:28:36 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 70879 00:07:14.422 08:28:36 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:14.422 08:28:36 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.422 08:28:36 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70879 00:07:14.422 killing process with pid 70879 00:07:14.422 08:28:36 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.422 08:28:36 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.422 08:28:36 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70879' 00:07:14.422 08:28:36 alias_rpc -- common/autotest_common.sh@973 -- # kill 70879 00:07:14.422 08:28:36 alias_rpc -- common/autotest_common.sh@978 -- # wait 70879 00:07:14.989 00:07:14.989 real 0m1.806s 00:07:14.989 user 0m1.899s 00:07:14.989 sys 0m0.487s 00:07:14.989 08:28:36 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.989 08:28:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.989 ************************************ 00:07:14.989 END TEST alias_rpc 00:07:14.989 ************************************ 00:07:14.989 08:28:36 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:14.989 08:28:36 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:14.989 08:28:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.989 08:28:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.989 08:28:36 -- common/autotest_common.sh@10 -- # set +x 00:07:14.989 ************************************ 00:07:14.989 START TEST spdkcli_tcp 00:07:14.989 ************************************ 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:14.989 * Looking for test storage... 
00:07:14.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.989 08:28:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:14.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.989 --rc genhtml_branch_coverage=1 00:07:14.989 --rc genhtml_function_coverage=1 00:07:14.989 --rc genhtml_legend=1 00:07:14.989 --rc geninfo_all_blocks=1 00:07:14.989 --rc geninfo_unexecuted_blocks=1 00:07:14.989 00:07:14.989 ' 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:14.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.989 --rc genhtml_branch_coverage=1 00:07:14.989 --rc genhtml_function_coverage=1 00:07:14.989 --rc genhtml_legend=1 00:07:14.989 --rc geninfo_all_blocks=1 00:07:14.989 --rc geninfo_unexecuted_blocks=1 00:07:14.989 
00:07:14.989 ' 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:14.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.989 --rc genhtml_branch_coverage=1 00:07:14.989 --rc genhtml_function_coverage=1 00:07:14.989 --rc genhtml_legend=1 00:07:14.989 --rc geninfo_all_blocks=1 00:07:14.989 --rc geninfo_unexecuted_blocks=1 00:07:14.989 00:07:14.989 ' 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:14.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.989 --rc genhtml_branch_coverage=1 00:07:14.989 --rc genhtml_function_coverage=1 00:07:14.989 --rc genhtml_legend=1 00:07:14.989 --rc geninfo_all_blocks=1 00:07:14.989 --rc geninfo_unexecuted_blocks=1 00:07:14.989 00:07:14.989 ' 00:07:14.989 08:28:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:14.989 08:28:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:14.989 08:28:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:14.989 08:28:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:14.989 08:28:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:14.989 08:28:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:14.989 08:28:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.989 08:28:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70964 00:07:14.989 08:28:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:14.989 08:28:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70964 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 70964 ']' 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.989 08:28:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.248 [2024-11-19 08:28:36.968773] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:15.248 [2024-11-19 08:28:36.968910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70964 ] 00:07:15.248 [2024-11-19 08:28:37.126933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.510 [2024-11-19 08:28:37.157815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.510 [2024-11-19 08:28:37.157897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.078 08:28:37 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.078 08:28:37 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:16.078 08:28:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70981 00:07:16.078 08:28:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:16.078 08:28:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:16.338 [ 00:07:16.338 "bdev_malloc_delete", 00:07:16.338 "bdev_malloc_create", 00:07:16.338 "bdev_null_resize", 00:07:16.338 "bdev_null_delete", 00:07:16.338 "bdev_null_create", 00:07:16.338 "bdev_nvme_cuse_unregister", 00:07:16.338 "bdev_nvme_cuse_register", 00:07:16.338 "bdev_opal_new_user", 00:07:16.338 "bdev_opal_set_lock_state", 00:07:16.338 "bdev_opal_delete", 00:07:16.338 "bdev_opal_get_info", 00:07:16.338 "bdev_opal_create", 00:07:16.338 "bdev_nvme_opal_revert", 00:07:16.338 "bdev_nvme_opal_init", 00:07:16.338 "bdev_nvme_send_cmd", 00:07:16.338 "bdev_nvme_set_keys", 00:07:16.338 "bdev_nvme_get_path_iostat", 00:07:16.338 "bdev_nvme_get_mdns_discovery_info", 00:07:16.339 "bdev_nvme_stop_mdns_discovery", 00:07:16.339 "bdev_nvme_start_mdns_discovery", 00:07:16.339 "bdev_nvme_set_multipath_policy", 00:07:16.339 "bdev_nvme_set_preferred_path", 00:07:16.339 "bdev_nvme_get_io_paths", 00:07:16.339 "bdev_nvme_remove_error_injection", 00:07:16.339 "bdev_nvme_add_error_injection", 00:07:16.339 "bdev_nvme_get_discovery_info", 00:07:16.339 "bdev_nvme_stop_discovery", 00:07:16.339 "bdev_nvme_start_discovery", 00:07:16.339 "bdev_nvme_get_controller_health_info", 00:07:16.339 "bdev_nvme_disable_controller", 00:07:16.339 "bdev_nvme_enable_controller", 00:07:16.339 "bdev_nvme_reset_controller", 00:07:16.339 "bdev_nvme_get_transport_statistics", 00:07:16.339 "bdev_nvme_apply_firmware", 00:07:16.339 "bdev_nvme_detach_controller", 00:07:16.339 "bdev_nvme_get_controllers", 00:07:16.339 "bdev_nvme_attach_controller", 00:07:16.339 "bdev_nvme_set_hotplug", 00:07:16.339 "bdev_nvme_set_options", 00:07:16.339 "bdev_passthru_delete", 00:07:16.339 "bdev_passthru_create", 00:07:16.339 "bdev_lvol_set_parent_bdev", 00:07:16.339 "bdev_lvol_set_parent", 00:07:16.339 "bdev_lvol_check_shallow_copy", 00:07:16.339 "bdev_lvol_start_shallow_copy", 00:07:16.339 "bdev_lvol_grow_lvstore", 00:07:16.339 "bdev_lvol_get_lvols", 00:07:16.339 "bdev_lvol_get_lvstores", 00:07:16.339 "bdev_lvol_delete", 00:07:16.339 "bdev_lvol_set_read_only", 00:07:16.339 "bdev_lvol_resize", 00:07:16.339 "bdev_lvol_decouple_parent", 00:07:16.339 "bdev_lvol_inflate", 00:07:16.339 "bdev_lvol_rename", 00:07:16.339 "bdev_lvol_clone_bdev", 00:07:16.339 "bdev_lvol_clone", 00:07:16.339 "bdev_lvol_snapshot", 00:07:16.339 "bdev_lvol_create", 00:07:16.339 "bdev_lvol_delete_lvstore", 00:07:16.339 "bdev_lvol_rename_lvstore", 00:07:16.339 
"bdev_lvol_create_lvstore", 00:07:16.339 "bdev_raid_set_options", 00:07:16.339 "bdev_raid_remove_base_bdev", 00:07:16.339 "bdev_raid_add_base_bdev", 00:07:16.339 "bdev_raid_delete", 00:07:16.339 "bdev_raid_create", 00:07:16.339 "bdev_raid_get_bdevs", 00:07:16.339 "bdev_error_inject_error", 00:07:16.339 "bdev_error_delete", 00:07:16.339 "bdev_error_create", 00:07:16.339 "bdev_split_delete", 00:07:16.339 "bdev_split_create", 00:07:16.339 "bdev_delay_delete", 00:07:16.339 "bdev_delay_create", 00:07:16.339 "bdev_delay_update_latency", 00:07:16.339 "bdev_zone_block_delete", 00:07:16.339 "bdev_zone_block_create", 00:07:16.339 "blobfs_create", 00:07:16.339 "blobfs_detect", 00:07:16.339 "blobfs_set_cache_size", 00:07:16.339 "bdev_xnvme_delete", 00:07:16.339 "bdev_xnvme_create", 00:07:16.339 "bdev_aio_delete", 00:07:16.339 "bdev_aio_rescan", 00:07:16.339 "bdev_aio_create", 00:07:16.339 "bdev_ftl_set_property", 00:07:16.339 "bdev_ftl_get_properties", 00:07:16.339 "bdev_ftl_get_stats", 00:07:16.339 "bdev_ftl_unmap", 00:07:16.339 "bdev_ftl_unload", 00:07:16.339 "bdev_ftl_delete", 00:07:16.339 "bdev_ftl_load", 00:07:16.339 "bdev_ftl_create", 00:07:16.339 "bdev_virtio_attach_controller", 00:07:16.339 "bdev_virtio_scsi_get_devices", 00:07:16.339 "bdev_virtio_detach_controller", 00:07:16.339 "bdev_virtio_blk_set_hotplug", 00:07:16.339 "bdev_iscsi_delete", 00:07:16.339 "bdev_iscsi_create", 00:07:16.339 "bdev_iscsi_set_options", 00:07:16.339 "accel_error_inject_error", 00:07:16.339 "ioat_scan_accel_module", 00:07:16.339 "dsa_scan_accel_module", 00:07:16.339 "iaa_scan_accel_module", 00:07:16.339 "keyring_file_remove_key", 00:07:16.339 "keyring_file_add_key", 00:07:16.339 "keyring_linux_set_options", 00:07:16.339 "fsdev_aio_delete", 00:07:16.339 "fsdev_aio_create", 00:07:16.339 "iscsi_get_histogram", 00:07:16.339 "iscsi_enable_histogram", 00:07:16.339 "iscsi_set_options", 00:07:16.339 "iscsi_get_auth_groups", 00:07:16.339 "iscsi_auth_group_remove_secret", 00:07:16.339 "iscsi_auth_group_add_secret", 00:07:16.339 "iscsi_delete_auth_group", 00:07:16.339 "iscsi_create_auth_group", 00:07:16.339 "iscsi_set_discovery_auth", 00:07:16.339 "iscsi_get_options", 00:07:16.339 "iscsi_target_node_request_logout", 00:07:16.339 "iscsi_target_node_set_redirect", 00:07:16.339 "iscsi_target_node_set_auth", 00:07:16.339 "iscsi_target_node_add_lun", 00:07:16.339 "iscsi_get_stats", 00:07:16.339 "iscsi_get_connections", 00:07:16.339 "iscsi_portal_group_set_auth", 00:07:16.339 "iscsi_start_portal_group", 00:07:16.339 "iscsi_delete_portal_group", 00:07:16.339 "iscsi_create_portal_group", 00:07:16.339 "iscsi_get_portal_groups", 00:07:16.339 "iscsi_delete_target_node", 00:07:16.339 "iscsi_target_node_remove_pg_ig_maps", 00:07:16.339 "iscsi_target_node_add_pg_ig_maps", 00:07:16.339 "iscsi_create_target_node", 00:07:16.339 "iscsi_get_target_nodes", 00:07:16.339 "iscsi_delete_initiator_group", 00:07:16.339 "iscsi_initiator_group_remove_initiators", 00:07:16.339 "iscsi_initiator_group_add_initiators", 00:07:16.339 "iscsi_create_initiator_group", 00:07:16.339 "iscsi_get_initiator_groups", 00:07:16.339 "nvmf_set_crdt", 00:07:16.339 "nvmf_set_config", 00:07:16.339 "nvmf_set_max_subsystems", 00:07:16.339 "nvmf_stop_mdns_prr", 00:07:16.339 "nvmf_publish_mdns_prr", 00:07:16.339 "nvmf_subsystem_get_listeners", 00:07:16.339 "nvmf_subsystem_get_qpairs", 00:07:16.339 "nvmf_subsystem_get_controllers", 00:07:16.339 "nvmf_get_stats", 00:07:16.339 "nvmf_get_transports", 00:07:16.339 "nvmf_create_transport", 00:07:16.339 "nvmf_get_targets", 00:07:16.339 
"nvmf_delete_target", 00:07:16.339 "nvmf_create_target", 00:07:16.339 "nvmf_subsystem_allow_any_host", 00:07:16.339 "nvmf_subsystem_set_keys", 00:07:16.339 "nvmf_subsystem_remove_host", 00:07:16.339 "nvmf_subsystem_add_host", 00:07:16.339 "nvmf_ns_remove_host", 00:07:16.339 "nvmf_ns_add_host", 00:07:16.339 "nvmf_subsystem_remove_ns", 00:07:16.339 "nvmf_subsystem_set_ns_ana_group", 00:07:16.339 "nvmf_subsystem_add_ns", 00:07:16.339 "nvmf_subsystem_listener_set_ana_state", 00:07:16.339 "nvmf_discovery_get_referrals", 00:07:16.339 "nvmf_discovery_remove_referral", 00:07:16.339 "nvmf_discovery_add_referral", 00:07:16.339 "nvmf_subsystem_remove_listener", 00:07:16.339 "nvmf_subsystem_add_listener", 00:07:16.339 "nvmf_delete_subsystem", 00:07:16.339 "nvmf_create_subsystem", 00:07:16.339 "nvmf_get_subsystems", 00:07:16.339 "env_dpdk_get_mem_stats", 00:07:16.339 "nbd_get_disks", 00:07:16.339 "nbd_stop_disk", 00:07:16.339 "nbd_start_disk", 00:07:16.339 "ublk_recover_disk", 00:07:16.339 "ublk_get_disks", 00:07:16.339 "ublk_stop_disk", 00:07:16.339 "ublk_start_disk", 00:07:16.339 "ublk_destroy_target", 00:07:16.339 "ublk_create_target", 00:07:16.339 "virtio_blk_create_transport", 00:07:16.339 "virtio_blk_get_transports", 00:07:16.339 "vhost_controller_set_coalescing", 00:07:16.339 "vhost_get_controllers", 00:07:16.339 "vhost_delete_controller", 00:07:16.339 "vhost_create_blk_controller", 00:07:16.339 "vhost_scsi_controller_remove_target", 00:07:16.339 "vhost_scsi_controller_add_target", 00:07:16.339 "vhost_start_scsi_controller", 00:07:16.339 "vhost_create_scsi_controller", 00:07:16.339 "thread_set_cpumask", 00:07:16.339 "scheduler_set_options", 00:07:16.339 "framework_get_governor", 00:07:16.339 "framework_get_scheduler", 00:07:16.339 "framework_set_scheduler", 00:07:16.339 "framework_get_reactors", 00:07:16.339 "thread_get_io_channels", 00:07:16.339 "thread_get_pollers", 00:07:16.339 "thread_get_stats", 00:07:16.339 "framework_monitor_context_switch", 00:07:16.339 "spdk_kill_instance", 00:07:16.339 "log_enable_timestamps", 00:07:16.339 "log_get_flags", 00:07:16.339 "log_clear_flag", 00:07:16.339 "log_set_flag", 00:07:16.339 "log_get_level", 00:07:16.339 "log_set_level", 00:07:16.339 "log_get_print_level", 00:07:16.339 "log_set_print_level", 00:07:16.339 "framework_enable_cpumask_locks", 00:07:16.339 "framework_disable_cpumask_locks", 00:07:16.339 "framework_wait_init", 00:07:16.339 "framework_start_init", 00:07:16.339 "scsi_get_devices", 00:07:16.339 "bdev_get_histogram", 00:07:16.339 "bdev_enable_histogram", 00:07:16.339 "bdev_set_qos_limit", 00:07:16.339 "bdev_set_qd_sampling_period", 00:07:16.339 "bdev_get_bdevs", 00:07:16.339 "bdev_reset_iostat", 00:07:16.339 "bdev_get_iostat", 00:07:16.339 "bdev_examine", 00:07:16.339 "bdev_wait_for_examine", 00:07:16.339 "bdev_set_options", 00:07:16.339 "accel_get_stats", 00:07:16.339 "accel_set_options", 00:07:16.339 "accel_set_driver", 00:07:16.339 "accel_crypto_key_destroy", 00:07:16.339 "accel_crypto_keys_get", 00:07:16.339 "accel_crypto_key_create", 00:07:16.339 "accel_assign_opc", 00:07:16.339 "accel_get_module_info", 00:07:16.339 "accel_get_opc_assignments", 00:07:16.339 "vmd_rescan", 00:07:16.339 "vmd_remove_device", 00:07:16.339 "vmd_enable", 00:07:16.339 "sock_get_default_impl", 00:07:16.339 "sock_set_default_impl", 00:07:16.339 "sock_impl_set_options", 00:07:16.339 "sock_impl_get_options", 00:07:16.339 "iobuf_get_stats", 00:07:16.339 "iobuf_set_options", 00:07:16.339 "keyring_get_keys", 00:07:16.339 "framework_get_pci_devices", 00:07:16.339 
"framework_get_config", 00:07:16.339 "framework_get_subsystems", 00:07:16.339 "fsdev_set_opts", 00:07:16.339 "fsdev_get_opts", 00:07:16.339 "trace_get_info", 00:07:16.339 "trace_get_tpoint_group_mask", 00:07:16.339 "trace_disable_tpoint_group", 00:07:16.339 "trace_enable_tpoint_group", 00:07:16.339 "trace_clear_tpoint_mask", 00:07:16.339 "trace_set_tpoint_mask", 00:07:16.340 "notify_get_notifications", 00:07:16.340 "notify_get_types", 00:07:16.340 "spdk_get_version", 00:07:16.340 "rpc_get_methods" 00:07:16.340 ] 00:07:16.340 08:28:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:16.340 08:28:38 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:16.340 08:28:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:16.340 08:28:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:16.340 08:28:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70964 00:07:16.340 08:28:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70964 ']' 00:07:16.340 08:28:38 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70964 00:07:16.340 08:28:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:16.340 08:28:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.340 08:28:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70964 00:07:16.340 08:28:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.340 08:28:38 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.340 killing process with pid 70964 00:07:16.340 08:28:38 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70964' 00:07:16.340 08:28:38 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70964 00:07:16.340 08:28:38 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70964 00:07:16.910 00:07:16.910 real 0m1.851s 00:07:16.910 user 0m3.173s 00:07:16.910 sys 0m0.571s 00:07:16.910 08:28:38 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.910 08:28:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:16.910 ************************************ 00:07:16.910 END TEST spdkcli_tcp 00:07:16.910 ************************************ 00:07:16.910 08:28:38 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:16.910 08:28:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.910 08:28:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.910 08:28:38 -- common/autotest_common.sh@10 -- # set +x 00:07:16.910 ************************************ 00:07:16.910 START TEST dpdk_mem_utility 00:07:16.910 ************************************ 00:07:16.910 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:16.910 * Looking for test storage... 
00:07:16.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:16.910 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.910 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.910 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.910 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:16.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
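The dpdk_mem_utility test traced in the following lines asks the running target to dump its DPDK memory statistics and then summarizes the dump; as a hedged sketch of that flow:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                # heap and mempool summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0           # per-element detail for heap 0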
00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.910 08:28:38 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:16.910 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.910 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.910 --rc genhtml_branch_coverage=1 00:07:16.911 --rc genhtml_function_coverage=1 00:07:16.911 --rc genhtml_legend=1 00:07:16.911 --rc geninfo_all_blocks=1 00:07:16.911 --rc geninfo_unexecuted_blocks=1 00:07:16.911 00:07:16.911 ' 00:07:16.911 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.911 --rc genhtml_branch_coverage=1 00:07:16.911 --rc genhtml_function_coverage=1 00:07:16.911 --rc genhtml_legend=1 00:07:16.911 --rc geninfo_all_blocks=1 00:07:16.911 --rc geninfo_unexecuted_blocks=1 00:07:16.911 00:07:16.911 ' 00:07:16.911 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.911 --rc genhtml_branch_coverage=1 00:07:16.911 --rc genhtml_function_coverage=1 00:07:16.911 --rc genhtml_legend=1 00:07:16.911 --rc geninfo_all_blocks=1 00:07:16.911 --rc geninfo_unexecuted_blocks=1 00:07:16.911 00:07:16.911 ' 00:07:16.911 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.911 --rc genhtml_branch_coverage=1 00:07:16.911 --rc genhtml_function_coverage=1 00:07:16.911 --rc genhtml_legend=1 00:07:16.911 --rc geninfo_all_blocks=1 00:07:16.911 --rc geninfo_unexecuted_blocks=1 00:07:16.911 00:07:16.911 ' 00:07:16.911 08:28:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:16.911 08:28:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71064 00:07:16.911 08:28:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71064 00:07:16.911 08:28:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:16.911 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71064 ']' 00:07:16.911 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.911 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.911 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.911 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.911 08:28:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:17.170 [2024-11-19 08:28:38.864255] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:17.171 [2024-11-19 08:28:38.864400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71064 ] 00:07:17.171 [2024-11-19 08:28:39.021491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.171 [2024-11-19 08:28:39.050606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.110 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.110 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:18.110 08:28:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:18.110 08:28:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:18.110 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.110 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:18.110 { 00:07:18.110 "filename": "/tmp/spdk_mem_dump.txt" 00:07:18.110 } 00:07:18.110 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.110 08:28:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:18.110 DPDK memory size 810.000000 MiB in 1 heap(s) 00:07:18.110 1 heaps totaling size 810.000000 MiB 00:07:18.110 size: 810.000000 MiB heap id: 0 00:07:18.110 end heaps---------- 00:07:18.110 9 mempools totaling size 595.772034 MiB 00:07:18.110 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:18.110 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:18.110 size: 92.545471 MiB name: bdev_io_71064 00:07:18.110 size: 50.003479 MiB name: msgpool_71064 00:07:18.110 size: 36.509338 MiB name: fsdev_io_71064 00:07:18.110 size: 21.763794 MiB name: PDU_Pool 00:07:18.110 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:18.110 size: 4.133484 MiB name: evtpool_71064 00:07:18.110 size: 0.026123 MiB name: Session_Pool 00:07:18.110 end mempools------- 00:07:18.110 6 memzones totaling size 4.142822 MiB 00:07:18.110 size: 1.000366 MiB name: RG_ring_0_71064 00:07:18.110 size: 1.000366 MiB name: RG_ring_1_71064 00:07:18.110 size: 1.000366 MiB name: RG_ring_4_71064 00:07:18.110 size: 1.000366 MiB name: RG_ring_5_71064 00:07:18.110 size: 0.125366 MiB name: RG_ring_2_71064 00:07:18.110 size: 0.015991 MiB name: RG_ring_3_71064 00:07:18.110 end memzones------- 00:07:18.110 08:28:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:18.110 heap id: 0 total size: 810.000000 MiB number of busy elements: 319 number of free elements: 15 00:07:18.110 list of free elements. 
size: 10.812134 MiB 00:07:18.110 element at address: 0x200018a00000 with size: 0.999878 MiB 00:07:18.110 element at address: 0x200018c00000 with size: 0.999878 MiB 00:07:18.110 element at address: 0x200031800000 with size: 0.994446 MiB 00:07:18.111 element at address: 0x200000400000 with size: 0.993958 MiB 00:07:18.111 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:18.111 element at address: 0x200012c00000 with size: 0.954285 MiB 00:07:18.111 element at address: 0x200018e00000 with size: 0.936584 MiB 00:07:18.111 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:18.111 element at address: 0x20001a600000 with size: 0.566406 MiB 00:07:18.111 element at address: 0x20000a600000 with size: 0.488892 MiB 00:07:18.111 element at address: 0x200000c00000 with size: 0.487000 MiB 00:07:18.111 element at address: 0x200019000000 with size: 0.485657 MiB 00:07:18.111 element at address: 0x200003e00000 with size: 0.480286 MiB 00:07:18.111 element at address: 0x200027a00000 with size: 0.395935 MiB 00:07:18.111 element at address: 0x200000800000 with size: 0.351746 MiB 00:07:18.111 list of standard malloc elements. size: 199.268982 MiB 00:07:18.111 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:18.111 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:18.111 element at address: 0x200018afff80 with size: 1.000122 MiB 00:07:18.111 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:07:18.111 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:18.111 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:18.111 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:07:18.111 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:18.111 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:07:18.111 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:07:18.111 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000085e580 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087e840 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087e900 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087f080 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087f140 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087f200 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087f380 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087f440 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087f500 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:07:18.111 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:18.111 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:07:18.111 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:07:18.112 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691000 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6910c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691180 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691240 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691300 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691480 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691540 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691600 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691780 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691840 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691900 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692080 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692140 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692200 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692380 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692440 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692500 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692680 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692740 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692800 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692980 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692a40 with size: 0.000183 MiB 
00:07:18.112 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693040 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693100 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693280 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693340 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693400 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693580 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693640 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693700 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693880 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693940 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694000 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694180 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694240 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694300 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694480 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694540 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694600 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694780 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694840 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694900 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:07:18.112 element at 
address: 0x20001a694fc0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a695080 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a695140 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a695200 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a695380 with size: 0.000183 MiB 00:07:18.112 element at address: 0x20001a695440 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a65680 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6c280 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6e1c0 
with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:07:18.112 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:07:18.113 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:07:18.113 list of memzone associated elements. 
size: 599.918884 MiB 00:07:18.113 element at address: 0x20001a695500 with size: 211.416748 MiB 00:07:18.113 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:18.113 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:07:18.113 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:18.113 element at address: 0x200012df4780 with size: 92.045044 MiB 00:07:18.113 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71064_0 00:07:18.113 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:18.113 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71064_0 00:07:18.113 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:18.113 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71064_0 00:07:18.113 element at address: 0x2000191be940 with size: 20.255554 MiB 00:07:18.113 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:18.113 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:07:18.113 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:18.113 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:18.113 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71064_0 00:07:18.113 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:18.113 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71064 00:07:18.113 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:18.113 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71064 00:07:18.113 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:18.113 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:18.113 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:07:18.113 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:18.113 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:18.113 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:18.113 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:18.113 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:18.113 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:18.113 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71064 00:07:18.113 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:18.113 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71064 00:07:18.113 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:07:18.113 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71064 00:07:18.113 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:07:18.113 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71064 00:07:18.113 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:18.113 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71064 00:07:18.113 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:18.113 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71064 00:07:18.113 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:18.113 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:18.113 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:18.113 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:18.113 element at address: 0x20001907c540 with size: 0.250488 MiB 00:07:18.113 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:07:18.113 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:18.113 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71064 00:07:18.113 element at address: 0x20000085e640 with size: 0.125488 MiB 00:07:18.113 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71064 00:07:18.113 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:07:18.113 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:18.113 element at address: 0x200027a65740 with size: 0.023743 MiB 00:07:18.113 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:18.113 element at address: 0x20000085a380 with size: 0.016113 MiB 00:07:18.113 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71064 00:07:18.113 element at address: 0x200027a6b880 with size: 0.002441 MiB 00:07:18.113 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:18.113 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:07:18.113 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71064 00:07:18.113 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:18.113 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71064 00:07:18.113 element at address: 0x20000085a180 with size: 0.000305 MiB 00:07:18.113 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71064 00:07:18.113 element at address: 0x200027a6c340 with size: 0.000305 MiB 00:07:18.113 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:18.113 08:28:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:18.113 08:28:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71064 00:07:18.113 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71064 ']' 00:07:18.113 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71064 00:07:18.113 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:18.113 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.113 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71064 00:07:18.113 killing process with pid 71064 00:07:18.113 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.113 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.113 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71064' 00:07:18.113 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71064 00:07:18.113 08:28:39 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71064 00:07:18.437 00:07:18.437 real 0m1.707s 00:07:18.437 user 0m1.725s 00:07:18.437 sys 0m0.486s 00:07:18.437 08:28:40 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.437 08:28:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:18.437 ************************************ 00:07:18.437 END TEST dpdk_mem_utility 00:07:18.437 ************************************ 00:07:18.437 08:28:40 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:18.437 08:28:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.437 08:28:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.437 08:28:40 -- common/autotest_common.sh@10 -- # set +x 
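The dpdk_mem_utility pass traced above reduces to three operations: start spdk_tgt, ask it over RPC to dump its DPDK memory state, and post-process the dump with dpdk_mem_info.py. A minimal sketch of reproducing it by hand, assuming a target answering on the default /var/tmp/spdk.sock socket and the repository layout used in this job:
./build/bin/spdk_tgt &                      # start the target in the background
./scripts/rpc.py env_dpdk_get_mem_stats     # writes the dump to /tmp/spdk_mem_dump.txt (see the RPC reply above)
./scripts/dpdk_mem_info.py                  # heap / mempool / memzone summary
./scripts/dpdk_mem_info.py -m 0             # per-element breakdown of heap id 0
The plain and -m 0 invocations correspond to the two blocks of dpdk_mem_info.py output captured above.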
00:07:18.709 ************************************ 00:07:18.709 START TEST event 00:07:18.709 ************************************ 00:07:18.709 08:28:40 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:18.709 * Looking for test storage... 00:07:18.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:18.709 08:28:40 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.709 08:28:40 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.709 08:28:40 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.709 08:28:40 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.710 08:28:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.710 08:28:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.710 08:28:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.710 08:28:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.710 08:28:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.710 08:28:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.710 08:28:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.710 08:28:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.710 08:28:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.710 08:28:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.710 08:28:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.710 08:28:40 event -- scripts/common.sh@344 -- # case "$op" in 00:07:18.710 08:28:40 event -- scripts/common.sh@345 -- # : 1 00:07:18.710 08:28:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.710 08:28:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.710 08:28:40 event -- scripts/common.sh@365 -- # decimal 1 00:07:18.710 08:28:40 event -- scripts/common.sh@353 -- # local d=1 00:07:18.710 08:28:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.710 08:28:40 event -- scripts/common.sh@355 -- # echo 1 00:07:18.710 08:28:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.710 08:28:40 event -- scripts/common.sh@366 -- # decimal 2 00:07:18.710 08:28:40 event -- scripts/common.sh@353 -- # local d=2 00:07:18.710 08:28:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.710 08:28:40 event -- scripts/common.sh@355 -- # echo 2 00:07:18.710 08:28:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.710 08:28:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.710 08:28:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.710 08:28:40 event -- scripts/common.sh@368 -- # return 0 00:07:18.710 08:28:40 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.710 08:28:40 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.710 --rc genhtml_branch_coverage=1 00:07:18.710 --rc genhtml_function_coverage=1 00:07:18.710 --rc genhtml_legend=1 00:07:18.710 --rc geninfo_all_blocks=1 00:07:18.710 --rc geninfo_unexecuted_blocks=1 00:07:18.710 00:07:18.710 ' 00:07:18.710 08:28:40 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.710 --rc genhtml_branch_coverage=1 00:07:18.710 --rc genhtml_function_coverage=1 00:07:18.710 --rc genhtml_legend=1 00:07:18.710 --rc 
geninfo_all_blocks=1 00:07:18.710 --rc geninfo_unexecuted_blocks=1 00:07:18.710 00:07:18.710 ' 00:07:18.710 08:28:40 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.710 --rc genhtml_branch_coverage=1 00:07:18.710 --rc genhtml_function_coverage=1 00:07:18.710 --rc genhtml_legend=1 00:07:18.710 --rc geninfo_all_blocks=1 00:07:18.710 --rc geninfo_unexecuted_blocks=1 00:07:18.710 00:07:18.710 ' 00:07:18.710 08:28:40 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.710 --rc genhtml_branch_coverage=1 00:07:18.710 --rc genhtml_function_coverage=1 00:07:18.710 --rc genhtml_legend=1 00:07:18.710 --rc geninfo_all_blocks=1 00:07:18.710 --rc geninfo_unexecuted_blocks=1 00:07:18.710 00:07:18.710 ' 00:07:18.710 08:28:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:18.710 08:28:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:18.710 08:28:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:18.710 08:28:40 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:18.710 08:28:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.710 08:28:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.710 ************************************ 00:07:18.710 START TEST event_perf 00:07:18.710 ************************************ 00:07:18.710 08:28:40 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:18.710 Running I/O for 1 seconds...[2024-11-19 08:28:40.610929] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:18.710 [2024-11-19 08:28:40.611067] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71150 ] 00:07:18.969 [2024-11-19 08:28:40.752597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.969 [2024-11-19 08:28:40.784986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.969 [2024-11-19 08:28:40.785162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.969 [2024-11-19 08:28:40.785206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.969 Running I/O for 1 seconds...[2024-11-19 08:28:40.785218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.347 00:07:20.347 lcore 0: 194353 00:07:20.347 lcore 1: 194350 00:07:20.347 lcore 2: 194350 00:07:20.347 lcore 3: 194351 00:07:20.347 done. 
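The per-lcore counts above come from the event_perf binary running one reactor per core in the 0xF mask for one second. A minimal sketch of the same invocation, using the paths and flags from this job:
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
#   -m 0xF : core mask, i.e. four reactors (lcores 0-3 in the counts above)
#   -t 1   : run for one second, then print the events processed per lcore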
00:07:20.347 00:07:20.347 real 0m1.282s 00:07:20.347 user 0m4.080s 00:07:20.347 sys 0m0.080s 00:07:20.347 08:28:41 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.347 08:28:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.347 ************************************ 00:07:20.347 END TEST event_perf 00:07:20.347 ************************************ 00:07:20.347 08:28:41 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:20.347 08:28:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:20.347 08:28:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.347 08:28:41 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.347 ************************************ 00:07:20.347 START TEST event_reactor 00:07:20.347 ************************************ 00:07:20.347 08:28:41 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:20.347 [2024-11-19 08:28:41.957885] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:20.347 [2024-11-19 08:28:41.958025] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71186 ] 00:07:20.347 [2024-11-19 08:28:42.114187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.347 [2024-11-19 08:28:42.142413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.293 test_start 00:07:21.293 oneshot 00:07:21.293 tick 100 00:07:21.293 tick 100 00:07:21.293 tick 250 00:07:21.293 tick 100 00:07:21.293 tick 100 00:07:21.293 tick 100 00:07:21.293 tick 250 00:07:21.293 tick 500 00:07:21.293 tick 100 00:07:21.293 tick 100 00:07:21.293 tick 250 00:07:21.293 tick 100 00:07:21.293 tick 100 00:07:21.293 test_end 00:07:21.565 00:07:21.565 real 0m1.285s 00:07:21.565 user 0m1.100s 00:07:21.565 sys 0m0.077s 00:07:21.565 08:28:43 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.565 08:28:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:21.565 ************************************ 00:07:21.565 END TEST event_reactor 00:07:21.565 ************************************ 00:07:21.565 08:28:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:21.565 08:28:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:21.565 08:28:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.565 08:28:43 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.565 ************************************ 00:07:21.565 START TEST event_reactor_perf 00:07:21.565 ************************************ 00:07:21.565 08:28:43 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:21.565 [2024-11-19 08:28:43.305078] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:21.565 [2024-11-19 08:28:43.305234] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71217 ] 00:07:21.565 [2024-11-19 08:28:43.459896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.824 [2024-11-19 08:28:43.488661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.762 test_start 00:07:22.762 test_end 00:07:22.762 Performance: 367178 events per second 00:07:22.762 00:07:22.762 real 0m1.281s 00:07:22.762 user 0m1.099s 00:07:22.762 sys 0m0.075s 00:07:22.762 08:28:44 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.762 08:28:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:22.762 ************************************ 00:07:22.762 END TEST event_reactor_perf 00:07:22.762 ************************************ 00:07:22.762 08:28:44 event -- event/event.sh@49 -- # uname -s 00:07:22.762 08:28:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:22.762 08:28:44 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:22.762 08:28:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.762 08:28:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.762 08:28:44 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.762 ************************************ 00:07:22.762 START TEST event_scheduler 00:07:22.762 ************************************ 00:07:22.762 08:28:44 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:23.021 * Looking for test storage... 
00:07:23.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:23.021 08:28:44 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:23.021 08:28:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:23.021 08:28:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.021 08:28:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.021 08:28:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:23.022 08:28:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:23.022 08:28:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.022 08:28:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:23.022 08:28:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.022 08:28:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.022 08:28:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.022 08:28:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:23.022 08:28:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.022 08:28:44 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.022 --rc genhtml_branch_coverage=1 00:07:23.022 --rc genhtml_function_coverage=1 00:07:23.022 --rc genhtml_legend=1 00:07:23.022 --rc geninfo_all_blocks=1 00:07:23.022 --rc geninfo_unexecuted_blocks=1 00:07:23.022 00:07:23.022 ' 00:07:23.022 08:28:44 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.022 --rc genhtml_branch_coverage=1 00:07:23.022 --rc genhtml_function_coverage=1 00:07:23.022 --rc genhtml_legend=1 00:07:23.022 --rc geninfo_all_blocks=1 00:07:23.022 --rc geninfo_unexecuted_blocks=1 00:07:23.022 00:07:23.022 ' 00:07:23.022 08:28:44 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:23.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.022 --rc genhtml_branch_coverage=1 00:07:23.022 --rc genhtml_function_coverage=1 00:07:23.022 --rc genhtml_legend=1 00:07:23.022 --rc geninfo_all_blocks=1 00:07:23.022 --rc geninfo_unexecuted_blocks=1 00:07:23.022 00:07:23.022 ' 00:07:23.022 08:28:44 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.022 --rc genhtml_branch_coverage=1 00:07:23.022 --rc genhtml_function_coverage=1 00:07:23.022 --rc genhtml_legend=1 00:07:23.022 --rc geninfo_all_blocks=1 00:07:23.022 --rc geninfo_unexecuted_blocks=1 00:07:23.022 00:07:23.022 ' 00:07:23.022 08:28:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:23.022 08:28:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:23.022 08:28:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71293 00:07:23.022 08:28:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:23.022 08:28:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71293 00:07:23.022 08:28:44 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 71293 ']' 00:07:23.022 08:28:44 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.022 08:28:44 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.022 08:28:44 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.022 08:28:44 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.022 08:28:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:23.022 [2024-11-19 08:28:44.906047] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:23.022 [2024-11-19 08:28:44.906180] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71293 ] 00:07:23.281 [2024-11-19 08:28:45.066025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.281 [2024-11-19 08:28:45.099065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.281 [2024-11-19 08:28:45.099362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.281 [2024-11-19 08:28:45.099293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.281 [2024-11-19 08:28:45.099482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.219 08:28:45 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.219 08:28:45 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:24.219 08:28:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:24.219 08:28:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.219 08:28:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.219 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:24.219 POWER: Cannot set governor of lcore 0 to userspace 00:07:24.219 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:24.219 POWER: Cannot set governor of lcore 0 to performance 00:07:24.219 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:24.219 POWER: Cannot set governor of lcore 0 to userspace 00:07:24.219 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:24.219 POWER: Unable to set Power Management Environment for lcore 0 00:07:24.219 [2024-11-19 08:28:45.827833] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:24.219 [2024-11-19 08:28:45.827881] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:24.219 [2024-11-19 08:28:45.827910] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:24.219 [2024-11-19 08:28:45.827990] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:24.219 [2024-11-19 08:28:45.828006] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:24.219 [2024-11-19 08:28:45.828021] scheduler_dynamic.c: 
431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:24.219 08:28:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.219 08:28:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:24.219 08:28:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.219 08:28:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.219 [2024-11-19 08:28:45.907801] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:24.219 08:28:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.219 08:28:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:24.219 08:28:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.219 08:28:45 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.219 08:28:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.219 ************************************ 00:07:24.219 START TEST scheduler_create_thread 00:07:24.219 ************************************ 00:07:24.219 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:24.219 08:28:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:24.219 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.219 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.219 2 00:07:24.219 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.219 08:28:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:24.219 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.219 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.219 3 00:07:24.219 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.219 08:28:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:24.219 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.219 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.219 4 00:07:24.219 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.220 5 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.220 6 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.220 7 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.220 08:28:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.220 8 00:07:24.220 08:28:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.220 08:28:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:24.220 08:28:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.220 08:28:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.220 9 00:07:24.220 08:28:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.220 08:28:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:24.220 08:28:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.220 08:28:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.845 10 00:07:24.845 08:28:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.845 08:28:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:24.845 08:28:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.845 08:28:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.225 08:28:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.225 08:28:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:26.226 08:28:47 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:26.226 08:28:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.226 08:28:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.795 08:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.795 08:28:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:26.795 08:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.795 08:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.766 08:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.766 08:28:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:27.767 08:28:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:27.767 08:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.767 08:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.335 08:28:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.335 00:07:28.335 real 0m4.211s 00:07:28.335 user 0m0.025s 00:07:28.335 ************************************ 00:07:28.335 END TEST scheduler_create_thread 00:07:28.335 ************************************ 00:07:28.335 sys 0m0.010s 00:07:28.335 08:28:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.335 08:28:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.335 08:28:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:28.335 08:28:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71293 00:07:28.335 08:28:50 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 71293 ']' 00:07:28.335 08:28:50 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 71293 00:07:28.335 08:28:50 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:28.335 08:28:50 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.335 08:28:50 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71293 00:07:28.335 killing process with pid 71293 00:07:28.335 08:28:50 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:28.335 08:28:50 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:28.335 08:28:50 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71293' 00:07:28.335 08:28:50 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 71293 00:07:28.335 08:28:50 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 71293 00:07:28.595 [2024-11-19 08:28:50.409045] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
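The scheduler_create_thread trace above reduces to a handful of scheduler_plugin RPCs. A minimal sketch of the same sequence, assuming a running SPDK scheduler test app and that the rpc_cmd wrapper seen in the trace resolves to scripts/rpc.py against that app's RPC socket (both are assumptions about the harness, not shown in this log):

  #!/usr/bin/env bash
  # Sketch only; not captured output. Assumes scheduler_plugin is importable by rpc.py.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin "$@"; }
  rpc framework_start_init                                            # scheduler.sh@40: finish framework init
  tid=$(rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)   # 100%-active thread pinned to core 0
  rpc scheduler_thread_set_active "$tid" 50                           # scheduler.sh@23: rebalance to 50% active
  rpc scheduler_thread_delete "$tid"                                  # scheduler.sh@26: remove the thread again
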
00:07:28.854 ************************************ 00:07:28.854 END TEST event_scheduler 00:07:28.855 ************************************ 00:07:28.855 00:07:28.855 real 0m6.056s 00:07:28.855 user 0m13.353s 00:07:28.855 sys 0m0.462s 00:07:28.855 08:28:50 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.855 08:28:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:28.855 08:28:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:28.855 08:28:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:28.855 08:28:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.855 08:28:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.855 08:28:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.855 ************************************ 00:07:28.855 START TEST app_repeat 00:07:28.855 ************************************ 00:07:28.855 08:28:50 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:28.855 Process app_repeat pid: 71405 00:07:28.855 spdk_app_start Round 0 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71405 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71405' 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71405 /var/tmp/spdk-nbd.sock 00:07:28.855 08:28:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71405 ']' 00:07:28.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:28.855 08:28:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:28.855 08:28:50 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:28.855 08:28:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.855 08:28:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:28.855 08:28:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.855 08:28:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:29.114 [2024-11-19 08:28:50.797370] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:29.114 [2024-11-19 08:28:50.797516] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71405 ] 00:07:29.114 [2024-11-19 08:28:50.953065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.114 [2024-11-19 08:28:50.983437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.114 [2024-11-19 08:28:50.983528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.052 08:28:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.052 08:28:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:30.052 08:28:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:30.052 Malloc0 00:07:30.053 08:28:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:30.313 Malloc1 00:07:30.313 08:28:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.313 08:28:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:30.573 /dev/nbd0 00:07:30.573 08:28:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:30.573 08:28:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:30.573 08:28:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:30.573 08:28:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:30.573 08:28:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:30.573 08:28:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:30.573 08:28:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:30.573 08:28:52 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:07:30.573 08:28:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:30.573 08:28:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:30.573 08:28:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:30.573 1+0 records in 00:07:30.573 1+0 records out 00:07:30.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405292 s, 10.1 MB/s 00:07:30.573 08:28:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.573 08:28:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:30.573 08:28:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.573 08:28:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:30.573 08:28:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:30.573 08:28:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.573 08:28:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.573 08:28:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:30.834 /dev/nbd1 00:07:30.834 08:28:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:30.834 08:28:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:30.834 1+0 records in 00:07:30.834 1+0 records out 00:07:30.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432326 s, 9.5 MB/s 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:30.834 08:28:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:30.834 08:28:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.834 08:28:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.834 08:28:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:30.834 08:28:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.834 
08:28:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:31.093 08:28:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:31.093 { 00:07:31.093 "nbd_device": "/dev/nbd0", 00:07:31.093 "bdev_name": "Malloc0" 00:07:31.093 }, 00:07:31.093 { 00:07:31.093 "nbd_device": "/dev/nbd1", 00:07:31.093 "bdev_name": "Malloc1" 00:07:31.093 } 00:07:31.093 ]' 00:07:31.093 08:28:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:31.093 { 00:07:31.093 "nbd_device": "/dev/nbd0", 00:07:31.093 "bdev_name": "Malloc0" 00:07:31.093 }, 00:07:31.093 { 00:07:31.093 "nbd_device": "/dev/nbd1", 00:07:31.093 "bdev_name": "Malloc1" 00:07:31.093 } 00:07:31.093 ]' 00:07:31.093 08:28:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:31.093 08:28:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:31.093 /dev/nbd1' 00:07:31.353 08:28:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:31.353 /dev/nbd1' 00:07:31.353 08:28:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:31.353 08:28:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:31.353 256+0 records in 00:07:31.353 256+0 records out 00:07:31.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141626 s, 74.0 MB/s 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:31.353 256+0 records in 00:07:31.353 256+0 records out 00:07:31.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230227 s, 45.5 MB/s 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:31.353 256+0 records in 00:07:31.353 256+0 records out 00:07:31.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215379 s, 48.7 MB/s 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:31.353 08:28:53 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:31.353 08:28:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.354 08:28:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.354 08:28:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:31.354 08:28:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:31.354 08:28:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.354 08:28:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:31.613 08:28:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:31.613 08:28:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:31.613 08:28:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:31.613 08:28:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.613 08:28:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.613 08:28:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:31.613 08:28:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:31.613 08:28:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.613 08:28:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.613 08:28:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:31.873 08:28:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:31.873 08:28:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:31.873 08:28:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:31.873 08:28:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.873 08:28:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.873 08:28:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:31.873 08:28:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:31.873 08:28:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.873 08:28:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:31.873 08:28:53 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.873 08:28:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.181 08:28:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:32.181 08:28:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.181 08:28:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:32.181 08:28:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:32.181 08:28:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:32.181 08:28:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.181 08:28:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:32.181 08:28:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:32.181 08:28:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:32.181 08:28:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:32.181 08:28:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:32.181 08:28:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:32.181 08:28:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:32.440 08:28:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:32.440 [2024-11-19 08:28:54.271590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.440 [2024-11-19 08:28:54.301459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.440 [2024-11-19 08:28:54.301463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.698 [2024-11-19 08:28:54.345498] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:32.698 [2024-11-19 08:28:54.345586] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:35.237 08:28:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:35.237 spdk_app_start Round 1 00:07:35.237 08:28:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:35.237 08:28:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71405 /var/tmp/spdk-nbd.sock 00:07:35.237 08:28:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71405 ']' 00:07:35.237 08:28:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:35.237 08:28:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:35.237 08:28:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:35.237 08:28:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.237 08:28:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:35.496 08:28:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.496 08:28:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:35.496 08:28:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:35.755 Malloc0 00:07:36.015 08:28:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:36.015 Malloc1 00:07:36.275 08:28:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:36.275 08:28:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:36.275 /dev/nbd0 00:07:36.535 08:28:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:36.535 08:28:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:36.535 1+0 records in 00:07:36.535 1+0 records out 
00:07:36.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044476 s, 9.2 MB/s 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.535 08:28:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:36.535 08:28:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.535 08:28:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:36.535 08:28:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:36.794 /dev/nbd1 00:07:36.794 08:28:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:36.794 08:28:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:36.794 1+0 records in 00:07:36.794 1+0 records out 00:07:36.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237971 s, 17.2 MB/s 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.794 08:28:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:36.794 08:28:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.794 08:28:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:36.794 08:28:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:36.794 08:28:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.794 08:28:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:37.052 08:28:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:37.052 { 00:07:37.052 "nbd_device": "/dev/nbd0", 00:07:37.052 "bdev_name": "Malloc0" 00:07:37.052 }, 00:07:37.052 { 00:07:37.052 "nbd_device": "/dev/nbd1", 00:07:37.052 "bdev_name": "Malloc1" 00:07:37.052 } 
00:07:37.052 ]' 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:37.053 { 00:07:37.053 "nbd_device": "/dev/nbd0", 00:07:37.053 "bdev_name": "Malloc0" 00:07:37.053 }, 00:07:37.053 { 00:07:37.053 "nbd_device": "/dev/nbd1", 00:07:37.053 "bdev_name": "Malloc1" 00:07:37.053 } 00:07:37.053 ]' 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:37.053 /dev/nbd1' 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:37.053 /dev/nbd1' 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:37.053 256+0 records in 00:07:37.053 256+0 records out 00:07:37.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129287 s, 81.1 MB/s 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:37.053 256+0 records in 00:07:37.053 256+0 records out 00:07:37.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269069 s, 39.0 MB/s 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.053 08:28:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:37.312 256+0 records in 00:07:37.312 256+0 records out 00:07:37.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251533 s, 41.7 MB/s 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:37.312 08:28:58 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.312 08:28:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:37.593 08:28:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:37.593 08:28:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:37.593 08:28:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:37.593 08:28:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.593 08:28:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.593 08:28:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:37.593 08:28:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:37.593 08:28:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.593 08:28:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.593 08:28:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:37.852 08:28:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:37.852 08:28:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:37.852 08:28:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:37.852 08:28:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.852 08:28:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.852 08:28:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:37.852 08:28:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:37.852 08:28:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.852 08:28:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:37.852 08:28:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.852 08:28:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.111 08:28:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:38.112 08:28:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:38.112 08:28:59 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:38.112 08:28:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:38.112 08:28:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.112 08:28:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:38.112 08:28:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:38.112 08:28:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:38.112 08:28:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:38.112 08:28:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:38.112 08:28:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:38.112 08:28:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:38.112 08:28:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:38.371 08:29:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:38.371 [2024-11-19 08:29:00.264597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:38.631 [2024-11-19 08:29:00.291165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.631 [2024-11-19 08:29:00.291186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.631 [2024-11-19 08:29:00.336122] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:38.631 [2024-11-19 08:29:00.336203] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:41.923 08:29:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:41.923 spdk_app_start Round 2 00:07:41.923 08:29:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:41.923 08:29:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71405 /var/tmp/spdk-nbd.sock 00:07:41.924 08:29:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71405 ']' 00:07:41.924 08:29:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:41.924 08:29:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:41.924 08:29:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:41.924 08:29:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.924 08:29:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:41.924 08:29:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.924 08:29:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:41.924 08:29:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:41.924 Malloc0 00:07:41.924 08:29:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:42.184 Malloc1 00:07:42.184 08:29:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:42.184 08:29:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:42.444 /dev/nbd0 00:07:42.444 08:29:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:42.444 08:29:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:42.444 1+0 records in 00:07:42.444 1+0 records out 
00:07:42.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292486 s, 14.0 MB/s 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:42.444 08:29:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:42.444 08:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.444 08:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:42.444 08:29:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:42.704 /dev/nbd1 00:07:42.704 08:29:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:42.704 08:29:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:42.704 1+0 records in 00:07:42.704 1+0 records out 00:07:42.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449553 s, 9.1 MB/s 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:42.704 08:29:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:42.704 08:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.704 08:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:42.704 08:29:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.704 08:29:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.704 08:29:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:42.964 { 00:07:42.964 "nbd_device": "/dev/nbd0", 00:07:42.964 "bdev_name": "Malloc0" 00:07:42.964 }, 00:07:42.964 { 00:07:42.964 "nbd_device": "/dev/nbd1", 00:07:42.964 "bdev_name": "Malloc1" 00:07:42.964 } 
00:07:42.964 ]' 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:42.964 { 00:07:42.964 "nbd_device": "/dev/nbd0", 00:07:42.964 "bdev_name": "Malloc0" 00:07:42.964 }, 00:07:42.964 { 00:07:42.964 "nbd_device": "/dev/nbd1", 00:07:42.964 "bdev_name": "Malloc1" 00:07:42.964 } 00:07:42.964 ]' 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:42.964 /dev/nbd1' 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:42.964 /dev/nbd1' 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:42.964 256+0 records in 00:07:42.964 256+0 records out 00:07:42.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437313 s, 240 MB/s 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.964 08:29:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:43.224 256+0 records in 00:07:43.224 256+0 records out 00:07:43.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025963 s, 40.4 MB/s 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:43.224 256+0 records in 00:07:43.224 256+0 records out 00:07:43.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228393 s, 45.9 MB/s 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.224 08:29:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:43.483 08:29:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:43.483 08:29:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:43.483 08:29:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:43.483 08:29:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.483 08:29:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.483 08:29:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:43.483 08:29:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:43.483 08:29:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.483 08:29:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.483 08:29:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:43.742 08:29:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:43.742 08:29:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:43.742 08:29:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:43.742 08:29:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.742 08:29:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.742 08:29:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:43.742 08:29:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:43.742 08:29:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.742 08:29:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:43.742 08:29:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.742 08:29:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.002 08:29:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:44.002 08:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.002 08:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:07:44.002 08:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:44.002 08:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:44.002 08:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.002 08:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:44.002 08:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:44.002 08:29:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:44.002 08:29:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:44.002 08:29:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:44.002 08:29:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:44.002 08:29:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:44.261 08:29:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:44.520 [2024-11-19 08:29:06.251216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:44.520 [2024-11-19 08:29:06.282669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.520 [2024-11-19 08:29:06.282675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.520 [2024-11-19 08:29:06.327480] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:44.520 [2024-11-19 08:29:06.327559] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:47.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:47.814 08:29:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71405 /var/tmp/spdk-nbd.sock 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71405 ']' 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:47.814 08:29:09 event.app_repeat -- event/event.sh@39 -- # killprocess 71405 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 71405 ']' 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 71405 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71405 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71405' 00:07:47.814 killing process with pid 71405 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@973 -- # kill 71405 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@978 -- # wait 71405 00:07:47.814 spdk_app_start is called in Round 0. 00:07:47.814 Shutdown signal received, stop current app iteration 00:07:47.814 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 reinitialization... 00:07:47.814 spdk_app_start is called in Round 1. 00:07:47.814 Shutdown signal received, stop current app iteration 00:07:47.814 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 reinitialization... 00:07:47.814 spdk_app_start is called in Round 2. 00:07:47.814 Shutdown signal received, stop current app iteration 00:07:47.814 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 reinitialization... 00:07:47.814 spdk_app_start is called in Round 3. 00:07:47.814 Shutdown signal received, stop current app iteration 00:07:47.814 08:29:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:47.814 08:29:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:47.814 00:07:47.814 real 0m18.847s 00:07:47.814 user 0m42.390s 00:07:47.814 sys 0m2.996s 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.814 08:29:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:47.814 ************************************ 00:07:47.814 END TEST app_repeat 00:07:47.814 ************************************ 00:07:47.814 08:29:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:47.814 08:29:09 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:47.814 08:29:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.814 08:29:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.814 08:29:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:47.814 ************************************ 00:07:47.814 START TEST cpu_locks 00:07:47.814 ************************************ 00:07:47.814 08:29:09 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:48.074 * Looking for test storage... 
00:07:48.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:48.075 08:29:09 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:48.075 08:29:09 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:48.075 08:29:09 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:48.075 08:29:09 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.075 08:29:09 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:48.075 08:29:09 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.075 08:29:09 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:48.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.075 --rc genhtml_branch_coverage=1 00:07:48.075 --rc genhtml_function_coverage=1 00:07:48.075 --rc genhtml_legend=1 00:07:48.075 --rc geninfo_all_blocks=1 00:07:48.075 --rc geninfo_unexecuted_blocks=1 00:07:48.075 00:07:48.075 ' 00:07:48.075 08:29:09 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:48.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.075 --rc genhtml_branch_coverage=1 00:07:48.075 --rc genhtml_function_coverage=1 
00:07:48.075 --rc genhtml_legend=1 00:07:48.075 --rc geninfo_all_blocks=1 00:07:48.075 --rc geninfo_unexecuted_blocks=1 00:07:48.075 00:07:48.075 ' 00:07:48.075 08:29:09 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:48.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.075 --rc genhtml_branch_coverage=1 00:07:48.075 --rc genhtml_function_coverage=1 00:07:48.075 --rc genhtml_legend=1 00:07:48.075 --rc geninfo_all_blocks=1 00:07:48.075 --rc geninfo_unexecuted_blocks=1 00:07:48.075 00:07:48.075 ' 00:07:48.075 08:29:09 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:48.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.075 --rc genhtml_branch_coverage=1 00:07:48.075 --rc genhtml_function_coverage=1 00:07:48.075 --rc genhtml_legend=1 00:07:48.075 --rc geninfo_all_blocks=1 00:07:48.075 --rc geninfo_unexecuted_blocks=1 00:07:48.075 00:07:48.075 ' 00:07:48.075 08:29:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:48.075 08:29:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:48.075 08:29:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:48.075 08:29:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:48.075 08:29:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.075 08:29:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.075 08:29:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.075 ************************************ 00:07:48.075 START TEST default_locks 00:07:48.075 ************************************ 00:07:48.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.075 08:29:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:48.075 08:29:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71846 00:07:48.075 08:29:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:48.075 08:29:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71846 00:07:48.075 08:29:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71846 ']' 00:07:48.075 08:29:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.075 08:29:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.075 08:29:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.075 08:29:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.075 08:29:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.075 [2024-11-19 08:29:09.966977] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:48.075 [2024-11-19 08:29:09.967225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71846 ] 00:07:48.334 [2024-11-19 08:29:10.120399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.334 [2024-11-19 08:29:10.150348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.272 08:29:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.272 08:29:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:49.272 08:29:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71846 00:07:49.272 08:29:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:49.272 08:29:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71846 00:07:49.272 08:29:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71846 00:07:49.272 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 71846 ']' 00:07:49.272 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 71846 00:07:49.272 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:49.272 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.272 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71846 00:07:49.272 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.272 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.272 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71846' 00:07:49.272 killing process with pid 71846 00:07:49.272 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 71846 00:07:49.272 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 71846 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71846 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71846 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 71846 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71846 ']' 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.841 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.841 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71846) - No such process 00:07:49.841 ERROR: process (pid: 71846) is no longer running 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:49.841 00:07:49.841 real 0m1.604s 00:07:49.841 user 0m1.623s 00:07:49.841 sys 0m0.514s 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.841 08:29:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.841 ************************************ 00:07:49.841 END TEST default_locks 00:07:49.841 ************************************ 00:07:49.841 08:29:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:49.841 08:29:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.841 08:29:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.841 08:29:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.841 ************************************ 00:07:49.841 START TEST default_locks_via_rpc 00:07:49.841 ************************************ 00:07:49.841 08:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:49.841 08:29:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:49.841 08:29:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71899 00:07:49.841 08:29:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71899 00:07:49.841 08:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71899 ']' 00:07:49.841 08:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.841 08:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.841 08:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.841 08:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.841 08:29:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.841 [2024-11-19 08:29:11.644890] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:49.841 [2024-11-19 08:29:11.645233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71899 ] 00:07:50.100 [2024-11-19 08:29:11.799613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.100 [2024-11-19 08:29:11.828011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71899 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71899 00:07:50.673 08:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:50.961 08:29:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71899 00:07:50.961 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 71899 ']' 00:07:50.961 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 71899 00:07:50.961 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:50.961 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.961 08:29:12 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71899 00:07:50.961 killing process with pid 71899 00:07:50.961 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.961 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.961 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71899' 00:07:50.961 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 71899 00:07:50.961 08:29:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 71899 00:07:51.531 00:07:51.531 real 0m1.624s 00:07:51.531 user 0m1.629s 00:07:51.531 sys 0m0.530s 00:07:51.531 08:29:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.531 08:29:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.531 ************************************ 00:07:51.531 END TEST default_locks_via_rpc 00:07:51.531 ************************************ 00:07:51.531 08:29:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:51.531 08:29:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.531 08:29:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.531 08:29:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.531 ************************************ 00:07:51.531 START TEST non_locking_app_on_locked_coremask 00:07:51.531 ************************************ 00:07:51.531 08:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:51.531 08:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71951 00:07:51.531 08:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:51.531 08:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71951 /var/tmp/spdk.sock 00:07:51.531 08:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71951 ']' 00:07:51.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.531 08:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.531 08:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.531 08:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.531 08:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.531 08:29:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.531 [2024-11-19 08:29:13.333888] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:51.531 [2024-11-19 08:29:13.334050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71951 ] 00:07:51.791 [2024-11-19 08:29:13.476673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.791 [2024-11-19 08:29:13.527372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.360 08:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.361 08:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:52.361 08:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:52.361 08:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71967 00:07:52.361 08:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71967 /var/tmp/spdk2.sock 00:07:52.361 08:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71967 ']' 00:07:52.361 08:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:52.361 08:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.361 08:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:52.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:52.361 08:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.361 08:29:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.619 [2024-11-19 08:29:14.327962] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:52.619 [2024-11-19 08:29:14.328197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71967 ] 00:07:52.619 [2024-11-19 08:29:14.488919] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:52.619 [2024-11-19 08:29:14.489019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.878 [2024-11-19 08:29:14.546704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.475 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.475 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:53.475 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71951 00:07:53.475 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:53.475 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71951 00:07:54.057 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71951 00:07:54.057 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71951 ']' 00:07:54.057 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71951 00:07:54.057 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:54.057 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.057 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71951 00:07:54.057 killing process with pid 71951 00:07:54.057 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.057 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.057 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71951' 00:07:54.057 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71951 00:07:54.057 08:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71951 00:07:54.627 08:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71967 00:07:54.627 08:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71967 ']' 00:07:54.627 08:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71967 00:07:54.627 08:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:54.627 08:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.627 08:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71967 00:07:54.627 killing process with pid 71967 00:07:54.627 08:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.627 08:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.627 08:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71967' 00:07:54.627 08:29:16 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71967 00:07:54.627 08:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71967 00:07:55.196 00:07:55.196 real 0m3.636s 00:07:55.196 user 0m3.931s 00:07:55.196 sys 0m1.065s 00:07:55.196 08:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.196 ************************************ 00:07:55.196 END TEST non_locking_app_on_locked_coremask 00:07:55.196 ************************************ 00:07:55.196 08:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.196 08:29:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:55.196 08:29:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.196 08:29:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.196 08:29:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.196 ************************************ 00:07:55.196 START TEST locking_app_on_unlocked_coremask 00:07:55.196 ************************************ 00:07:55.196 08:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:55.196 08:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72025 00:07:55.196 08:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:55.196 08:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72025 /var/tmp/spdk.sock 00:07:55.196 08:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72025 ']' 00:07:55.196 08:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.196 08:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.196 08:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.196 08:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.196 08:29:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.196 [2024-11-19 08:29:17.037181] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:55.196 [2024-11-19 08:29:17.037406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72025 ] 00:07:55.465 [2024-11-19 08:29:17.176492] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:55.465 [2024-11-19 08:29:17.176642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.465 [2024-11-19 08:29:17.205345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.047 08:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.047 08:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:56.047 08:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:56.047 08:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72041 00:07:56.047 08:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72041 /var/tmp/spdk2.sock 00:07:56.047 08:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72041 ']' 00:07:56.047 08:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:56.047 08:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.047 08:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:56.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:56.047 08:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.047 08:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.307 [2024-11-19 08:29:17.990726] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:56.307 [2024-11-19 08:29:17.991001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72041 ] 00:07:56.307 [2024-11-19 08:29:18.153184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.307 [2024-11-19 08:29:18.209026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.244 08:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.244 08:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:57.244 08:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72041 00:07:57.244 08:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72041 00:07:57.244 08:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:57.502 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72025 00:07:57.502 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72025 ']' 00:07:57.502 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72025 00:07:57.502 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:57.502 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.502 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72025 00:07:57.502 killing process with pid 72025 00:07:57.502 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.502 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.502 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72025' 00:07:57.502 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72025 00:07:57.502 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72025 00:07:58.096 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72041 00:07:58.096 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72041 ']' 00:07:58.096 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72041 00:07:58.096 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:58.096 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.096 08:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72041 00:07:58.356 killing process with pid 72041 00:07:58.357 08:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.357 08:29:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.357 08:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72041' 00:07:58.357 08:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72041 00:07:58.357 08:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72041 00:07:58.618 00:07:58.618 real 0m3.418s 00:07:58.618 user 0m3.680s 00:07:58.618 sys 0m1.004s 00:07:58.618 08:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.618 ************************************ 00:07:58.618 END TEST locking_app_on_unlocked_coremask 00:07:58.618 ************************************ 00:07:58.618 08:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.618 08:29:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:58.618 08:29:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.618 08:29:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.618 08:29:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.618 ************************************ 00:07:58.618 START TEST locking_app_on_locked_coremask 00:07:58.618 ************************************ 00:07:58.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.618 08:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:58.618 08:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72105 00:07:58.618 08:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72105 /var/tmp/spdk.sock 00:07:58.618 08:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72105 ']' 00:07:58.618 08:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.618 08:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.618 08:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.618 08:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.618 08:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.618 08:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:58.878 [2024-11-19 08:29:20.520847] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:58.878 [2024-11-19 08:29:20.521022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72105 ] 00:07:58.878 [2024-11-19 08:29:20.680349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.878 [2024-11-19 08:29:20.707604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72115 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72115 /var/tmp/spdk2.sock 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72115 /var/tmp/spdk2.sock 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72115 /var/tmp/spdk2.sock 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72115 ']' 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:59.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.447 08:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.706 [2024-11-19 08:29:21.444996] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:59.706 [2024-11-19 08:29:21.445262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72115 ] 00:07:59.706 [2024-11-19 08:29:21.608148] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72105 has claimed it. 00:07:59.706 [2024-11-19 08:29:21.608247] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:00.276 ERROR: process (pid: 72115) is no longer running 00:08:00.276 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72115) - No such process 00:08:00.276 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.276 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:00.276 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:00.276 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:00.276 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:00.276 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:00.276 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72105 00:08:00.276 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72105 00:08:00.276 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:00.537 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72105 00:08:00.537 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72105 ']' 00:08:00.537 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72105 00:08:00.537 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:00.537 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.537 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72105 00:08:00.537 killing process with pid 72105 00:08:00.537 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.537 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.537 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72105' 00:08:00.537 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72105 00:08:00.537 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72105 00:08:01.113 00:08:01.113 real 0m2.335s 00:08:01.113 user 0m2.542s 00:08:01.113 sys 0m0.700s 00:08:01.113 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.113 ************************************ 00:08:01.113 END 
TEST locking_app_on_locked_coremask 00:08:01.113 ************************************ 00:08:01.113 08:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.113 08:29:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:01.113 08:29:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.113 08:29:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.113 08:29:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:01.113 ************************************ 00:08:01.113 START TEST locking_overlapped_coremask 00:08:01.113 ************************************ 00:08:01.113 08:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:01.113 08:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72168 00:08:01.113 08:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:01.113 08:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72168 /var/tmp/spdk.sock 00:08:01.113 08:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72168 ']' 00:08:01.113 08:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.113 08:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.113 08:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.113 08:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.113 08:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.113 [2024-11-19 08:29:22.913923] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:01.113 [2024-11-19 08:29:22.914201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72168 ] 00:08:01.372 [2024-11-19 08:29:23.058252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.372 [2024-11-19 08:29:23.089035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.372 [2024-11-19 08:29:23.089057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.372 [2024-11-19 08:29:23.089077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72186 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72186 /var/tmp/spdk2.sock 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72186 /var/tmp/spdk2.sock 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72186 /var/tmp/spdk2.sock 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72186 ']' 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:01.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.941 08:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:02.201 [2024-11-19 08:29:23.872588] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:02.201 [2024-11-19 08:29:23.873315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72186 ] 00:08:02.201 [2024-11-19 08:29:24.047686] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72168 has claimed it. 00:08:02.201 [2024-11-19 08:29:24.051774] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:02.770 ERROR: process (pid: 72186) is no longer running 00:08:02.770 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72186) - No such process 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72168 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72168 ']' 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72168 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72168 00:08:02.770 killing process with pid 72168 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72168' 00:08:02.770 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72168 00:08:02.770 08:29:24 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72168 00:08:03.030 00:08:03.030 real 0m2.102s 00:08:03.030 user 0m5.745s 00:08:03.030 sys 0m0.532s 00:08:03.030 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.030 ************************************ 00:08:03.030 END TEST locking_overlapped_coremask 00:08:03.030 ************************************ 00:08:03.030 08:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.303 08:29:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:03.303 08:29:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.303 08:29:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.303 08:29:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:03.303 ************************************ 00:08:03.303 START TEST locking_overlapped_coremask_via_rpc 00:08:03.303 ************************************ 00:08:03.303 08:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:03.303 08:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72228 00:08:03.303 08:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72228 /var/tmp/spdk.sock 00:08:03.303 08:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:03.303 08:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72228 ']' 00:08:03.303 08:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.303 08:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.303 08:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.303 08:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.303 08:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.303 [2024-11-19 08:29:25.073456] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:03.303 [2024-11-19 08:29:25.073685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72228 ] 00:08:03.577 [2024-11-19 08:29:25.230905] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:03.577 [2024-11-19 08:29:25.231099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:03.577 [2024-11-19 08:29:25.260068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.577 [2024-11-19 08:29:25.260160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.577 [2024-11-19 08:29:25.260259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.147 08:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.147 08:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:04.147 08:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:04.147 08:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72246 00:08:04.147 08:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72246 /var/tmp/spdk2.sock 00:08:04.147 08:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72246 ']' 00:08:04.147 08:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:04.147 08:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.147 08:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:04.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:04.147 08:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.147 08:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.147 [2024-11-19 08:29:26.029050] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:04.147 [2024-11-19 08:29:26.029365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72246 ] 00:08:04.406 [2024-11-19 08:29:26.191909] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:04.406 [2024-11-19 08:29:26.192013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:04.406 [2024-11-19 08:29:26.257352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.406 [2024-11-19 08:29:26.257435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.406 [2024-11-19 08:29:26.257540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.347 [2024-11-19 08:29:26.922902] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72228 has claimed it. 
00:08:05.347 request: 00:08:05.347 { 00:08:05.347 "method": "framework_enable_cpumask_locks", 00:08:05.347 "req_id": 1 00:08:05.347 } 00:08:05.347 Got JSON-RPC error response 00:08:05.347 response: 00:08:05.347 { 00:08:05.347 "code": -32603, 00:08:05.347 "message": "Failed to claim CPU core: 2" 00:08:05.347 } 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72228 /var/tmp/spdk.sock 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72228 ']' 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.347 08:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:05.347 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.347 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:05.347 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72246 /var/tmp/spdk2.sock 00:08:05.347 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72246 ']' 00:08:05.347 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:05.347 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.347 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:05.347 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.347 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.608 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.608 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:05.608 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:05.608 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:05.608 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:05.608 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:05.608 00:08:05.608 real 0m2.412s 00:08:05.608 user 0m1.162s 00:08:05.608 sys 0m0.182s 00:08:05.608 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.608 08:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.608 ************************************ 00:08:05.608 END TEST locking_overlapped_coremask_via_rpc 00:08:05.608 ************************************ 00:08:05.608 08:29:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:05.608 08:29:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72228 ]] 00:08:05.608 08:29:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72228 00:08:05.608 08:29:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72228 ']' 00:08:05.608 08:29:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72228 00:08:05.608 08:29:27 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:05.608 08:29:27 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.608 08:29:27 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72228 00:08:05.608 08:29:27 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.608 08:29:27 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.608 08:29:27 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72228' 00:08:05.608 killing process with pid 72228 00:08:05.608 08:29:27 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72228 00:08:05.608 08:29:27 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72228 00:08:06.178 08:29:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72246 ]] 00:08:06.178 08:29:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72246 00:08:06.178 08:29:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72246 ']' 00:08:06.178 08:29:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72246 00:08:06.178 08:29:27 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:06.178 08:29:27 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.178 
08:29:27 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72246 00:08:06.178 08:29:27 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:06.178 08:29:27 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:06.178 08:29:27 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72246' 00:08:06.178 killing process with pid 72246 00:08:06.178 08:29:27 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72246 00:08:06.178 08:29:27 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72246 00:08:06.438 08:29:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:06.438 08:29:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:06.438 08:29:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72228 ]] 00:08:06.438 08:29:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72228 00:08:06.438 Process with pid 72228 is not found 00:08:06.438 Process with pid 72246 is not found 00:08:06.438 08:29:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72228 ']' 00:08:06.438 08:29:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72228 00:08:06.439 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72228) - No such process 00:08:06.439 08:29:28 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72228 is not found' 00:08:06.439 08:29:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72246 ]] 00:08:06.439 08:29:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72246 00:08:06.439 08:29:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72246 ']' 00:08:06.439 08:29:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72246 00:08:06.439 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72246) - No such process 00:08:06.439 08:29:28 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72246 is not found' 00:08:06.439 08:29:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:06.439 ************************************ 00:08:06.439 END TEST cpu_locks 00:08:06.439 ************************************ 00:08:06.439 00:08:06.439 real 0m18.629s 00:08:06.439 user 0m32.143s 00:08:06.439 sys 0m5.619s 00:08:06.439 08:29:28 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.439 08:29:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:06.439 ************************************ 00:08:06.439 END TEST event 00:08:06.439 ************************************ 00:08:06.439 00:08:06.439 real 0m47.998s 00:08:06.439 user 1m34.406s 00:08:06.439 sys 0m9.707s 00:08:06.439 08:29:28 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.439 08:29:28 event -- common/autotest_common.sh@10 -- # set +x 00:08:06.698 08:29:28 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:06.698 08:29:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.698 08:29:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.698 08:29:28 -- common/autotest_common.sh@10 -- # set +x 00:08:06.698 ************************************ 00:08:06.698 START TEST thread 00:08:06.698 ************************************ 00:08:06.698 08:29:28 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:06.698 * Looking for test storage... 
00:08:06.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:06.698 08:29:28 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:06.698 08:29:28 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:08:06.698 08:29:28 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:06.698 08:29:28 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:06.698 08:29:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.698 08:29:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.698 08:29:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.698 08:29:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.698 08:29:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.698 08:29:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.698 08:29:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.698 08:29:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.698 08:29:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.698 08:29:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.698 08:29:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.698 08:29:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:06.698 08:29:28 thread -- scripts/common.sh@345 -- # : 1 00:08:06.698 08:29:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.698 08:29:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:06.698 08:29:28 thread -- scripts/common.sh@365 -- # decimal 1 00:08:06.698 08:29:28 thread -- scripts/common.sh@353 -- # local d=1 00:08:06.698 08:29:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.698 08:29:28 thread -- scripts/common.sh@355 -- # echo 1 00:08:06.698 08:29:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.958 08:29:28 thread -- scripts/common.sh@366 -- # decimal 2 00:08:06.958 08:29:28 thread -- scripts/common.sh@353 -- # local d=2 00:08:06.958 08:29:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.958 08:29:28 thread -- scripts/common.sh@355 -- # echo 2 00:08:06.958 08:29:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.958 08:29:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.958 08:29:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.958 08:29:28 thread -- scripts/common.sh@368 -- # return 0 00:08:06.958 08:29:28 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.958 08:29:28 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:06.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.958 --rc genhtml_branch_coverage=1 00:08:06.958 --rc genhtml_function_coverage=1 00:08:06.958 --rc genhtml_legend=1 00:08:06.958 --rc geninfo_all_blocks=1 00:08:06.958 --rc geninfo_unexecuted_blocks=1 00:08:06.958 00:08:06.958 ' 00:08:06.958 08:29:28 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:06.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.958 --rc genhtml_branch_coverage=1 00:08:06.958 --rc genhtml_function_coverage=1 00:08:06.958 --rc genhtml_legend=1 00:08:06.958 --rc geninfo_all_blocks=1 00:08:06.958 --rc geninfo_unexecuted_blocks=1 00:08:06.958 00:08:06.958 ' 00:08:06.958 08:29:28 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:06.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:06.958 --rc genhtml_branch_coverage=1 00:08:06.958 --rc genhtml_function_coverage=1 00:08:06.958 --rc genhtml_legend=1 00:08:06.958 --rc geninfo_all_blocks=1 00:08:06.958 --rc geninfo_unexecuted_blocks=1 00:08:06.958 00:08:06.958 ' 00:08:06.958 08:29:28 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:06.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.958 --rc genhtml_branch_coverage=1 00:08:06.958 --rc genhtml_function_coverage=1 00:08:06.958 --rc genhtml_legend=1 00:08:06.958 --rc geninfo_all_blocks=1 00:08:06.958 --rc geninfo_unexecuted_blocks=1 00:08:06.958 00:08:06.958 ' 00:08:06.958 08:29:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:06.958 08:29:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:06.958 08:29:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.958 08:29:28 thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.958 ************************************ 00:08:06.958 START TEST thread_poller_perf 00:08:06.958 ************************************ 00:08:06.958 08:29:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:06.958 [2024-11-19 08:29:28.663282] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:06.958 [2024-11-19 08:29:28.663510] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72374 ] 00:08:06.958 [2024-11-19 08:29:28.806002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.958 [2024-11-19 08:29:28.834170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.958 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:08.336 [2024-11-19T08:29:30.243Z] ====================================== 00:08:08.336 [2024-11-19T08:29:30.243Z] busy:2298347488 (cyc) 00:08:08.336 [2024-11-19T08:29:30.243Z] total_run_count: 387000 00:08:08.336 [2024-11-19T08:29:30.243Z] tsc_hz: 2290000000 (cyc) 00:08:08.336 [2024-11-19T08:29:30.243Z] ====================================== 00:08:08.336 [2024-11-19T08:29:30.243Z] poller_cost: 5938 (cyc), 2593 (nsec) 00:08:08.336 00:08:08.336 real 0m1.280s 00:08:08.336 user 0m1.103s 00:08:08.336 sys 0m0.071s 00:08:08.336 08:29:29 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.336 08:29:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:08.336 ************************************ 00:08:08.336 END TEST thread_poller_perf 00:08:08.336 ************************************ 00:08:08.336 08:29:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:08.336 08:29:29 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:08.336 08:29:29 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.336 08:29:29 thread -- common/autotest_common.sh@10 -- # set +x 00:08:08.336 ************************************ 00:08:08.336 START TEST thread_poller_perf 00:08:08.336 ************************************ 00:08:08.336 08:29:29 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:08.336 [2024-11-19 08:29:30.009219] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:08.336 [2024-11-19 08:29:30.009412] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72414 ] 00:08:08.336 [2024-11-19 08:29:30.163209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.336 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:08:08.336 [2024-11-19 08:29:30.192332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.743 [2024-11-19T08:29:31.650Z] ====================================== 00:08:09.743 [2024-11-19T08:29:31.650Z] busy:2293165204 (cyc) 00:08:09.743 [2024-11-19T08:29:31.650Z] total_run_count: 4702000 00:08:09.743 [2024-11-19T08:29:31.650Z] tsc_hz: 2290000000 (cyc) 00:08:09.743 [2024-11-19T08:29:31.650Z] ====================================== 00:08:09.743 [2024-11-19T08:29:31.650Z] poller_cost: 487 (cyc), 212 (nsec) 00:08:09.743 00:08:09.743 real 0m1.290s 00:08:09.743 user 0m1.111s 00:08:09.743 sys 0m0.072s 00:08:09.743 08:29:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.743 ************************************ 00:08:09.743 END TEST thread_poller_perf 00:08:09.743 ************************************ 00:08:09.743 08:29:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:09.743 08:29:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:09.743 ************************************ 00:08:09.743 END TEST thread 00:08:09.743 ************************************ 00:08:09.743 00:08:09.743 real 0m2.912s 00:08:09.743 user 0m2.363s 00:08:09.743 sys 0m0.355s 00:08:09.743 08:29:31 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.743 08:29:31 thread -- common/autotest_common.sh@10 -- # set +x 00:08:09.743 08:29:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:09.743 08:29:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:09.743 08:29:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.743 08:29:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.743 08:29:31 -- common/autotest_common.sh@10 -- # set +x 00:08:09.743 ************************************ 00:08:09.743 START TEST app_cmdline 00:08:09.743 ************************************ 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:09.743 * Looking for test storage... 
00:08:09.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.743 08:29:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:09.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.743 --rc genhtml_branch_coverage=1 00:08:09.743 --rc genhtml_function_coverage=1 00:08:09.743 --rc genhtml_legend=1 00:08:09.743 --rc geninfo_all_blocks=1 00:08:09.743 --rc geninfo_unexecuted_blocks=1 00:08:09.743 00:08:09.743 ' 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:09.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.743 --rc genhtml_branch_coverage=1 00:08:09.743 --rc genhtml_function_coverage=1 00:08:09.743 --rc genhtml_legend=1 00:08:09.743 --rc geninfo_all_blocks=1 00:08:09.743 --rc geninfo_unexecuted_blocks=1 00:08:09.743 
00:08:09.743 ' 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:09.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.743 --rc genhtml_branch_coverage=1 00:08:09.743 --rc genhtml_function_coverage=1 00:08:09.743 --rc genhtml_legend=1 00:08:09.743 --rc geninfo_all_blocks=1 00:08:09.743 --rc geninfo_unexecuted_blocks=1 00:08:09.743 00:08:09.743 ' 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:09.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.743 --rc genhtml_branch_coverage=1 00:08:09.743 --rc genhtml_function_coverage=1 00:08:09.743 --rc genhtml_legend=1 00:08:09.743 --rc geninfo_all_blocks=1 00:08:09.743 --rc geninfo_unexecuted_blocks=1 00:08:09.743 00:08:09.743 ' 00:08:09.743 08:29:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:09.743 08:29:31 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:09.743 08:29:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72493 00:08:09.743 08:29:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72493 00:08:09.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 72493 ']' 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.743 08:29:31 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.744 08:29:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:10.003 [2024-11-19 08:29:31.659918] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:10.003 [2024-11-19 08:29:31.660151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72493 ] 00:08:10.003 [2024-11-19 08:29:31.818110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.003 [2024-11-19 08:29:31.846077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.938 08:29:32 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.938 08:29:32 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:10.938 08:29:32 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:10.938 { 00:08:10.938 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:08:10.938 "fields": { 00:08:10.938 "major": 25, 00:08:10.938 "minor": 1, 00:08:10.938 "patch": 0, 00:08:10.938 "suffix": "-pre", 00:08:10.938 "commit": "d47eb51c9" 00:08:10.938 } 00:08:10.938 } 00:08:10.938 08:29:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:10.938 08:29:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:10.938 08:29:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:10.938 08:29:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:10.938 08:29:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:10.938 08:29:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:10.938 08:29:32 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.938 08:29:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:10.938 08:29:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:10.938 08:29:32 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.938 08:29:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:10.938 08:29:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:10.938 08:29:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:10.938 08:29:32 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:10.938 08:29:32 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:10.938 08:29:32 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.938 08:29:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.938 08:29:32 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.939 08:29:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.939 08:29:32 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.939 08:29:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.939 08:29:32 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.939 08:29:32 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:10.939 08:29:32 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:11.198 request: 00:08:11.198 { 00:08:11.198 "method": "env_dpdk_get_mem_stats", 00:08:11.198 "req_id": 1 00:08:11.198 } 00:08:11.198 Got JSON-RPC error response 00:08:11.198 response: 00:08:11.198 { 00:08:11.198 "code": -32601, 00:08:11.198 "message": "Method not found" 00:08:11.198 } 00:08:11.198 08:29:32 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:11.198 08:29:32 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:11.198 08:29:32 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:11.198 08:29:32 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:11.198 08:29:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72493 00:08:11.198 08:29:32 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 72493 ']' 00:08:11.198 08:29:32 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 72493 00:08:11.198 08:29:33 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:11.198 08:29:33 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.198 08:29:33 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72493 00:08:11.198 killing process with pid 72493 00:08:11.198 08:29:33 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.198 08:29:33 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.198 08:29:33 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72493' 00:08:11.198 08:29:33 app_cmdline -- common/autotest_common.sh@973 -- # kill 72493 00:08:11.198 08:29:33 app_cmdline -- common/autotest_common.sh@978 -- # wait 72493 00:08:11.767 00:08:11.767 real 0m2.029s 00:08:11.767 user 0m2.345s 00:08:11.767 sys 0m0.530s 00:08:11.767 08:29:33 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.767 08:29:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:11.767 ************************************ 00:08:11.767 END TEST app_cmdline 00:08:11.767 ************************************ 00:08:11.767 08:29:33 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:11.767 08:29:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.767 08:29:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.767 08:29:33 -- common/autotest_common.sh@10 -- # set +x 00:08:11.767 ************************************ 00:08:11.767 START TEST version 00:08:11.767 ************************************ 00:08:11.767 08:29:33 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:11.767 * Looking for test storage... 
00:08:11.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:11.767 08:29:33 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.768 08:29:33 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.768 08:29:33 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.768 08:29:33 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.768 08:29:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.768 08:29:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.768 08:29:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.768 08:29:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.768 08:29:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.768 08:29:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.768 08:29:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.768 08:29:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.768 08:29:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.768 08:29:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.768 08:29:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.768 08:29:33 version -- scripts/common.sh@344 -- # case "$op" in 00:08:11.768 08:29:33 version -- scripts/common.sh@345 -- # : 1 00:08:11.768 08:29:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.768 08:29:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.768 08:29:33 version -- scripts/common.sh@365 -- # decimal 1 00:08:11.768 08:29:33 version -- scripts/common.sh@353 -- # local d=1 00:08:11.768 08:29:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.768 08:29:33 version -- scripts/common.sh@355 -- # echo 1 00:08:11.768 08:29:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.768 08:29:33 version -- scripts/common.sh@366 -- # decimal 2 00:08:11.768 08:29:33 version -- scripts/common.sh@353 -- # local d=2 00:08:11.768 08:29:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.768 08:29:33 version -- scripts/common.sh@355 -- # echo 2 00:08:11.768 08:29:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.768 08:29:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.768 08:29:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.768 08:29:33 version -- scripts/common.sh@368 -- # return 0 00:08:11.768 08:29:33 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.768 08:29:33 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.768 --rc genhtml_branch_coverage=1 00:08:11.768 --rc genhtml_function_coverage=1 00:08:11.768 --rc genhtml_legend=1 00:08:11.768 --rc geninfo_all_blocks=1 00:08:11.768 --rc geninfo_unexecuted_blocks=1 00:08:11.768 00:08:11.768 ' 00:08:11.768 08:29:33 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.768 --rc genhtml_branch_coverage=1 00:08:11.768 --rc genhtml_function_coverage=1 00:08:11.768 --rc genhtml_legend=1 00:08:11.768 --rc geninfo_all_blocks=1 00:08:11.768 --rc geninfo_unexecuted_blocks=1 00:08:11.768 00:08:11.768 ' 00:08:11.768 08:29:33 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.768 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:11.768 --rc genhtml_branch_coverage=1 00:08:11.768 --rc genhtml_function_coverage=1 00:08:11.768 --rc genhtml_legend=1 00:08:11.768 --rc geninfo_all_blocks=1 00:08:11.768 --rc geninfo_unexecuted_blocks=1 00:08:11.768 00:08:11.768 ' 00:08:11.768 08:29:33 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.768 --rc genhtml_branch_coverage=1 00:08:11.768 --rc genhtml_function_coverage=1 00:08:11.768 --rc genhtml_legend=1 00:08:11.768 --rc geninfo_all_blocks=1 00:08:11.768 --rc geninfo_unexecuted_blocks=1 00:08:11.768 00:08:11.768 ' 00:08:12.027 08:29:33 version -- app/version.sh@17 -- # get_header_version major 00:08:12.027 08:29:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:12.027 08:29:33 version -- app/version.sh@14 -- # cut -f2 00:08:12.027 08:29:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.027 08:29:33 version -- app/version.sh@17 -- # major=25 00:08:12.027 08:29:33 version -- app/version.sh@18 -- # get_header_version minor 00:08:12.027 08:29:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:12.027 08:29:33 version -- app/version.sh@14 -- # cut -f2 00:08:12.027 08:29:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.027 08:29:33 version -- app/version.sh@18 -- # minor=1 00:08:12.027 08:29:33 version -- app/version.sh@19 -- # get_header_version patch 00:08:12.027 08:29:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:12.027 08:29:33 version -- app/version.sh@14 -- # cut -f2 00:08:12.027 08:29:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.027 08:29:33 version -- app/version.sh@19 -- # patch=0 00:08:12.027 08:29:33 version -- app/version.sh@20 -- # get_header_version suffix 00:08:12.027 08:29:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:12.027 08:29:33 version -- app/version.sh@14 -- # cut -f2 00:08:12.027 08:29:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.027 08:29:33 version -- app/version.sh@20 -- # suffix=-pre 00:08:12.027 08:29:33 version -- app/version.sh@22 -- # version=25.1 00:08:12.027 08:29:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:12.027 08:29:33 version -- app/version.sh@28 -- # version=25.1rc0 00:08:12.027 08:29:33 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:12.027 08:29:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:12.027 08:29:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:12.027 08:29:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:12.027 ************************************ 00:08:12.027 END TEST version 00:08:12.027 ************************************ 00:08:12.027 00:08:12.027 real 0m0.311s 00:08:12.027 user 0m0.190s 00:08:12.027 sys 0m0.177s 00:08:12.027 08:29:33 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.027 08:29:33 version -- common/autotest_common.sh@10 -- # set +x 00:08:12.027 08:29:33 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:12.027 08:29:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:12.027 08:29:33 -- spdk/autotest.sh@194 -- # uname -s 00:08:12.028 08:29:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:12.028 08:29:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:12.028 08:29:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:12.028 08:29:33 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:08:12.028 08:29:33 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:12.028 08:29:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:12.028 08:29:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.028 08:29:33 -- common/autotest_common.sh@10 -- # set +x 00:08:12.028 ************************************ 00:08:12.028 START TEST blockdev_nvme 00:08:12.028 ************************************ 00:08:12.028 08:29:33 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:12.287 * Looking for test storage... 00:08:12.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:12.287 08:29:33 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.287 08:29:33 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.287 08:29:33 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.287 08:29:34 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.287 08:29:34 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:08:12.287 08:29:34 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.287 08:29:34 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.287 --rc genhtml_branch_coverage=1 00:08:12.287 --rc genhtml_function_coverage=1 00:08:12.287 --rc genhtml_legend=1 00:08:12.287 --rc geninfo_all_blocks=1 00:08:12.287 --rc geninfo_unexecuted_blocks=1 00:08:12.287 00:08:12.287 ' 00:08:12.287 08:29:34 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.287 --rc genhtml_branch_coverage=1 00:08:12.287 --rc genhtml_function_coverage=1 00:08:12.287 --rc genhtml_legend=1 00:08:12.287 --rc geninfo_all_blocks=1 00:08:12.287 --rc geninfo_unexecuted_blocks=1 00:08:12.287 00:08:12.287 ' 00:08:12.287 08:29:34 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.287 --rc genhtml_branch_coverage=1 00:08:12.287 --rc genhtml_function_coverage=1 00:08:12.287 --rc genhtml_legend=1 00:08:12.287 --rc geninfo_all_blocks=1 00:08:12.287 --rc geninfo_unexecuted_blocks=1 00:08:12.287 00:08:12.287 ' 00:08:12.287 08:29:34 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.287 --rc genhtml_branch_coverage=1 00:08:12.287 --rc genhtml_function_coverage=1 00:08:12.287 --rc genhtml_legend=1 00:08:12.287 --rc geninfo_all_blocks=1 00:08:12.287 --rc geninfo_unexecuted_blocks=1 00:08:12.287 00:08:12.287 ' 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:12.287 08:29:34 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72660 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:12.287 08:29:34 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 72660 00:08:12.287 08:29:34 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 72660 ']' 00:08:12.287 08:29:34 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.287 08:29:34 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.287 08:29:34 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.287 08:29:34 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.287 08:29:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.287 [2024-11-19 08:29:34.168912] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:12.288 [2024-11-19 08:29:34.169579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72660 ] 00:08:12.546 [2024-11-19 08:29:34.327393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.547 [2024-11-19 08:29:34.355068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.116 08:29:35 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.116 08:29:35 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:08:13.116 08:29:35 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:13.116 08:29:35 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:08:13.116 08:29:35 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:13.116 08:29:35 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:13.116 08:29:35 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:13.376 08:29:35 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:13.376 08:29:35 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.376 08:29:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.634 08:29:35 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.634 08:29:35 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:08:13.634 08:29:35 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.634 08:29:35 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.634 08:29:35 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.634 08:29:35 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:13.634 08:29:35 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:13.634 08:29:35 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.634 08:29:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.894 08:29:35 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.894 08:29:35 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:13.894 08:29:35 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:13.895 08:29:35 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "475b4f78-afb1-4d1d-b026-cba7cb58082e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "475b4f78-afb1-4d1d-b026-cba7cb58082e",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "3f4b7987-6132-4288-ac45-d81c231d01e0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3f4b7987-6132-4288-ac45-d81c231d01e0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "60acef8c-0f55-485e-90a3-15487009768d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "60acef8c-0f55-485e-90a3-15487009768d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b6e443a6-7a55-4061-aa38-fd1519d6d41c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b6e443a6-7a55-4061-aa38-fd1519d6d41c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1f19f94e-a66e-467b-9d1f-63cb0a8107c7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "1f19f94e-a66e-467b-9d1f-63cb0a8107c7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "29313461-2a7a-4872-938c-cc4de75cbf88"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "29313461-2a7a-4872-938c-cc4de75cbf88",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:13.895 08:29:35 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:13.895 08:29:35 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:13.895 08:29:35 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:13.895 08:29:35 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 72660 00:08:13.895 08:29:35 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 72660 ']' 00:08:13.895 08:29:35 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 72660 00:08:13.895 08:29:35 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:08:13.895 08:29:35 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.895 08:29:35 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72660 00:08:13.895 killing process with pid 72660 00:08:13.895 08:29:35 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.895 08:29:35 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.895 08:29:35 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72660' 00:08:13.895 08:29:35 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 72660 00:08:13.895 08:29:35 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 72660 00:08:14.155 08:29:36 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:14.155 08:29:36 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:14.155 08:29:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:14.155 08:29:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.155 08:29:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.155 ************************************ 00:08:14.155 START TEST bdev_hello_world 00:08:14.155 ************************************ 00:08:14.155 08:29:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:14.414 [2024-11-19 08:29:36.133690] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:14.414 [2024-11-19 08:29:36.133903] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72733 ] 00:08:14.414 [2024-11-19 08:29:36.292528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.674 [2024-11-19 08:29:36.320574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.942 [2024-11-19 08:29:36.701190] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:14.942 [2024-11-19 08:29:36.701339] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:14.942 [2024-11-19 08:29:36.701397] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:14.942 [2024-11-19 08:29:36.703731] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:14.942 [2024-11-19 08:29:36.704361] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:14.942 [2024-11-19 08:29:36.704437] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:14.942 [2024-11-19 08:29:36.704739] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:08:14.942 00:08:14.942 [2024-11-19 08:29:36.704803] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:15.210 00:08:15.210 real 0m0.872s 00:08:15.210 user 0m0.568s 00:08:15.210 sys 0m0.200s 00:08:15.210 ************************************ 00:08:15.210 END TEST bdev_hello_world 00:08:15.210 ************************************ 00:08:15.210 08:29:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.210 08:29:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:15.210 08:29:36 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:15.210 08:29:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:15.210 08:29:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.210 08:29:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:15.210 ************************************ 00:08:15.210 START TEST bdev_bounds 00:08:15.210 ************************************ 00:08:15.210 08:29:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:15.210 08:29:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72764 00:08:15.210 08:29:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:15.210 08:29:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:15.210 08:29:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72764' 00:08:15.210 Process bdevio pid: 72764 00:08:15.210 08:29:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72764 00:08:15.211 08:29:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72764 ']' 00:08:15.211 08:29:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.211 08:29:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.211 08:29:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.211 08:29:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.211 08:29:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:15.211 [2024-11-19 08:29:37.065226] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:15.211 [2024-11-19 08:29:37.065363] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72764 ] 00:08:15.469 [2024-11-19 08:29:37.223928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:15.469 [2024-11-19 08:29:37.252427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.469 [2024-11-19 08:29:37.252521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.469 [2024-11-19 08:29:37.252645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.406 08:29:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.406 08:29:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:16.406 08:29:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:16.406 I/O targets: 00:08:16.406 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:16.406 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:16.406 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:16.406 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:16.406 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:16.406 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:16.406 00:08:16.406 00:08:16.406 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.406 http://cunit.sourceforge.net/ 00:08:16.406 00:08:16.406 00:08:16.406 Suite: bdevio tests on: Nvme3n1 00:08:16.406 Test: blockdev write read block ...passed 00:08:16.406 Test: blockdev write zeroes read block ...passed 00:08:16.406 Test: blockdev write zeroes read no split ...passed 00:08:16.406 Test: blockdev write zeroes read split ...passed 00:08:16.406 Test: blockdev write zeroes read split partial ...passed 00:08:16.406 Test: blockdev reset ...[2024-11-19 08:29:38.041386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:16.406 passed 00:08:16.407 Test: blockdev write read 8 blocks ...[2024-11-19 08:29:38.043408] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:08:16.407 passed 00:08:16.407 Test: blockdev write read size > 128k ...passed 00:08:16.407 Test: blockdev write read invalid size ...passed 00:08:16.407 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:16.407 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:16.407 Test: blockdev write read max offset ...passed 00:08:16.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:16.407 Test: blockdev writev readv 8 blocks ...passed 00:08:16.407 Test: blockdev writev readv 30 x 1block ...passed 00:08:16.407 Test: blockdev writev readv block ...passed 00:08:16.407 Test: blockdev writev readv size > 128k ...passed 00:08:16.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:16.407 Test: blockdev comparev and writev ...[2024-11-19 08:29:38.048467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d000a000 len:0x1000 00:08:16.407 [2024-11-19 08:29:38.048522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:16.407 passed 00:08:16.407 Test: blockdev nvme passthru rw ...passed 00:08:16.407 Test: blockdev nvme passthru vendor specific ...passed 00:08:16.407 Test: blockdev nvme admin passthru ...[2024-11-19 08:29:38.049173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:16.407 [2024-11-19 08:29:38.049228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:16.407 passed 00:08:16.407 Test: blockdev copy ...passed 00:08:16.407 Suite: bdevio tests on: Nvme2n3 00:08:16.407 Test: blockdev write read block ...passed 00:08:16.407 Test: blockdev write zeroes read block ...passed 00:08:16.407 Test: blockdev write zeroes read no split ...passed 00:08:16.407 Test: blockdev write zeroes read split ...passed 00:08:16.407 Test: blockdev write zeroes read split partial ...passed 00:08:16.407 Test: blockdev reset ...[2024-11-19 08:29:38.065848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:16.407 passed 00:08:16.407 Test: blockdev write read 8 blocks ...[2024-11-19 08:29:38.068186] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:16.407 passed 00:08:16.407 Test: blockdev write read size > 128k ...passed 00:08:16.407 Test: blockdev write read invalid size ...passed 00:08:16.407 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:16.407 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:16.407 Test: blockdev write read max offset ...passed 00:08:16.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:16.407 Test: blockdev writev readv 8 blocks ...passed 00:08:16.407 Test: blockdev writev readv 30 x 1block ...passed 00:08:16.407 Test: blockdev writev readv block ...passed 00:08:16.407 Test: blockdev writev readv size > 128k ...passed 00:08:16.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:16.407 Test: blockdev comparev and writev ...[2024-11-19 08:29:38.073874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0003000 len:0x1000 00:08:16.407 [2024-11-19 08:29:38.073924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:16.407 passed 00:08:16.407 Test: blockdev nvme passthru rw ...passed 00:08:16.407 Test: blockdev nvme passthru vendor specific ...passed 00:08:16.407 Test: blockdev nvme admin passthru ...[2024-11-19 08:29:38.074667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:16.407 [2024-11-19 08:29:38.074738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:16.407 passed 00:08:16.407 Test: blockdev copy ...passed 00:08:16.407 Suite: bdevio tests on: Nvme2n2 00:08:16.407 Test: blockdev write read block ...passed 00:08:16.407 Test: blockdev write zeroes read block ...passed 00:08:16.407 Test: blockdev write zeroes read no split ...passed 00:08:16.407 Test: blockdev write zeroes read split ...passed 00:08:16.407 Test: blockdev write zeroes read split partial ...passed 00:08:16.407 Test: blockdev reset ...[2024-11-19 08:29:38.094367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:16.407 [2024-11-19 08:29:38.096993] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:16.407 passed 00:08:16.407 Test: blockdev write read 8 blocks ...passed 00:08:16.407 Test: blockdev write read size > 128k ...passed 00:08:16.407 Test: blockdev write read invalid size ...passed 00:08:16.407 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:16.407 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:16.407 Test: blockdev write read max offset ...passed 00:08:16.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:16.407 Test: blockdev writev readv 8 blocks ...passed 00:08:16.407 Test: blockdev writev readv 30 x 1block ...passed 00:08:16.407 Test: blockdev writev readv block ...passed 00:08:16.407 Test: blockdev writev readv size > 128k ...passed 00:08:16.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:16.407 Test: blockdev comparev and writev ...[2024-11-19 08:29:38.103263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0003000 len:0x1000 00:08:16.407 [2024-11-19 08:29:38.103317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:16.407 passed 00:08:16.407 Test: blockdev nvme passthru rw ...passed 00:08:16.407 Test: blockdev nvme passthru vendor specific ...passed 00:08:16.407 Test: blockdev nvme admin passthru ...[2024-11-19 08:29:38.104130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:16.407 [2024-11-19 08:29:38.104185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:16.407 passed 00:08:16.407 Test: blockdev copy ...passed 00:08:16.407 Suite: bdevio tests on: Nvme2n1 00:08:16.407 Test: blockdev write read block ...passed 00:08:16.407 Test: blockdev write zeroes read block ...passed 00:08:16.407 Test: blockdev write zeroes read no split ...passed 00:08:16.407 Test: blockdev write zeroes read split ...passed 00:08:16.407 Test: blockdev write zeroes read split partial ...passed 00:08:16.407 Test: blockdev reset ...[2024-11-19 08:29:38.122731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:16.407 [2024-11-19 08:29:38.125266] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:08:16.407 Test: blockdev write read 8 blocks ...uccessful. 
00:08:16.407 passed 00:08:16.407 Test: blockdev write read size > 128k ...passed 00:08:16.407 Test: blockdev write read invalid size ...passed 00:08:16.407 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:16.407 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:16.407 Test: blockdev write read max offset ...passed 00:08:16.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:16.407 Test: blockdev writev readv 8 blocks ...passed 00:08:16.407 Test: blockdev writev readv 30 x 1block ...passed 00:08:16.407 Test: blockdev writev readv block ...passed 00:08:16.407 Test: blockdev writev readv size > 128k ...passed 00:08:16.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:16.407 Test: blockdev comparev and writev ...[2024-11-19 08:29:38.131769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0003000 len:0x1000 00:08:16.407 [2024-11-19 08:29:38.131920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:16.407 passed 00:08:16.407 Test: blockdev nvme passthru rw ...passed 00:08:16.407 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:29:38.132860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:16.407 [2024-11-19 08:29:38.132993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:08:16.407 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:08:16.407 passed 00:08:16.407 Test: blockdev copy ...passed 00:08:16.407 Suite: bdevio tests on: Nvme1n1 00:08:16.407 Test: blockdev write read block ...passed 00:08:16.407 Test: blockdev write zeroes read block ...passed 00:08:16.407 Test: blockdev write zeroes read no split ...passed 00:08:16.407 Test: blockdev write zeroes read split ...passed 00:08:16.407 Test: blockdev write zeroes read split partial ...passed 00:08:16.407 Test: blockdev reset ...[2024-11-19 08:29:38.165604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:16.407 [2024-11-19 08:29:38.167635] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:16.407 passed 00:08:16.407 Test: blockdev write read 8 blocks ...passed 00:08:16.407 Test: blockdev write read size > 128k ...passed 00:08:16.407 Test: blockdev write read invalid size ...passed 00:08:16.407 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:16.407 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:16.407 Test: blockdev write read max offset ...passed 00:08:16.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:16.407 Test: blockdev writev readv 8 blocks ...passed 00:08:16.407 Test: blockdev writev readv 30 x 1block ...passed 00:08:16.407 Test: blockdev writev readv block ...passed 00:08:16.407 Test: blockdev writev readv size > 128k ...passed 00:08:16.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:16.407 Test: blockdev comparev and writev ...[2024-11-19 08:29:38.175404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc636000 len:0x1000 00:08:16.407 [2024-11-19 08:29:38.175551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:16.407 passed 00:08:16.407 Test: blockdev nvme passthru rw ...passed 00:08:16.407 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:29:38.176624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:16.408 [2024-11-19 08:29:38.176755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:16.408 passed 00:08:16.408 Test: blockdev nvme admin passthru ...passed 00:08:16.408 Test: blockdev copy ...passed 00:08:16.408 Suite: bdevio tests on: Nvme0n1 00:08:16.408 Test: blockdev write read block ...passed 00:08:16.408 Test: blockdev write zeroes read block ...passed 00:08:16.408 Test: blockdev write zeroes read no split ...passed 00:08:16.408 Test: blockdev write zeroes read split ...passed 00:08:16.408 Test: blockdev write zeroes read split partial ...passed 00:08:16.408 Test: blockdev reset ...[2024-11-19 08:29:38.208061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:16.408 [2024-11-19 08:29:38.210025] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller spasseduccessful. 00:08:16.408 00:08:16.408 Test: blockdev write read 8 blocks ...passed 00:08:16.408 Test: blockdev write read size > 128k ...passed 00:08:16.408 Test: blockdev write read invalid size ...passed 00:08:16.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:16.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:16.408 Test: blockdev write read max offset ...passed 00:08:16.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:16.408 Test: blockdev writev readv 8 blocks ...passed 00:08:16.408 Test: blockdev writev readv 30 x 1block ...passed 00:08:16.408 Test: blockdev writev readv block ...passed 00:08:16.408 Test: blockdev writev readv size > 128k ...passed 00:08:16.408 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:16.408 Test: blockdev comparev and writev ...[2024-11-19 08:29:38.217075] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:16.408 separate metadata which is not supported yet. 
00:08:16.408 passed 00:08:16.408 Test: blockdev nvme passthru rw ...passed 00:08:16.408 Test: blockdev nvme passthru vendor specific ...passed 00:08:16.408 Test: blockdev nvme admin passthru ...[2024-11-19 08:29:38.217868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:16.408 [2024-11-19 08:29:38.217929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:16.408 passed 00:08:16.408 Test: blockdev copy ...passed 00:08:16.408 00:08:16.408 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.408 suites 6 6 n/a 0 0 00:08:16.408 tests 138 138 138 0 0 00:08:16.408 asserts 893 893 893 0 n/a 00:08:16.408 00:08:16.408 Elapsed time = 0.448 seconds 00:08:16.408 0 00:08:16.408 08:29:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72764 00:08:16.408 08:29:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72764 ']' 00:08:16.408 08:29:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72764 00:08:16.408 08:29:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:16.408 08:29:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.408 08:29:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72764 00:08:16.408 08:29:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.408 08:29:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.408 08:29:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72764' 00:08:16.408 killing process with pid 72764 00:08:16.408 08:29:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72764 00:08:16.408 08:29:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72764 00:08:16.667 08:29:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:16.667 00:08:16.667 real 0m1.471s 00:08:16.667 user 0m3.693s 00:08:16.667 sys 0m0.349s 00:08:16.667 08:29:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.667 08:29:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:16.667 ************************************ 00:08:16.667 END TEST bdev_bounds 00:08:16.667 ************************************ 00:08:16.667 08:29:38 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:16.667 08:29:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:16.667 08:29:38 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.667 08:29:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.667 ************************************ 00:08:16.667 START TEST bdev_nbd 00:08:16.667 ************************************ 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:16.667 08:29:38 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72812 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:16.667 08:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72812 /var/tmp/spdk-nbd.sock 00:08:16.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:16.668 08:29:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72812 ']' 00:08:16.668 08:29:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:16.668 08:29:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.668 08:29:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:16.668 08:29:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.668 08:29:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:16.928 [2024-11-19 08:29:38.614597] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:16.928 [2024-11-19 08:29:38.614898] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.928 [2024-11-19 08:29:38.774653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.928 [2024-11-19 08:29:38.804664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:17.900 1+0 records in 
00:08:17.900 1+0 records out 00:08:17.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044238 s, 9.3 MB/s 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:17.900 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.171 1+0 records in 00:08:18.171 1+0 records out 00:08:18.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531605 s, 7.7 MB/s 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:18.171 08:29:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:18.171 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:18.171 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.431 1+0 records in 00:08:18.431 1+0 records out 00:08:18.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771232 s, 5.3 MB/s 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:18.431 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.692 1+0 records in 00:08:18.692 1+0 records out 00:08:18.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694787 s, 5.9 MB/s 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.692 08:29:40 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:18.692 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.952 1+0 records in 00:08:18.952 1+0 records out 00:08:18.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649956 s, 6.3 MB/s 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:18.952 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.953 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:18.953 08:29:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:18.953 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:18.953 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:18.953 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:19.212 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:19.212 08:29:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:19.212 1+0 records in 00:08:19.212 1+0 records out 00:08:19.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820413 s, 5.0 MB/s 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:19.212 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:19.472 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:19.472 { 00:08:19.472 "nbd_device": "/dev/nbd0", 00:08:19.472 "bdev_name": "Nvme0n1" 00:08:19.472 }, 00:08:19.472 { 00:08:19.472 "nbd_device": "/dev/nbd1", 00:08:19.472 "bdev_name": "Nvme1n1" 00:08:19.472 }, 00:08:19.472 { 00:08:19.472 "nbd_device": "/dev/nbd2", 00:08:19.472 "bdev_name": "Nvme2n1" 00:08:19.472 }, 00:08:19.472 { 00:08:19.472 "nbd_device": "/dev/nbd3", 00:08:19.472 "bdev_name": "Nvme2n2" 00:08:19.472 }, 00:08:19.472 { 00:08:19.472 "nbd_device": "/dev/nbd4", 00:08:19.472 "bdev_name": "Nvme2n3" 00:08:19.472 }, 00:08:19.472 { 00:08:19.472 "nbd_device": "/dev/nbd5", 00:08:19.472 "bdev_name": "Nvme3n1" 00:08:19.472 } 00:08:19.472 ]' 00:08:19.472 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:19.472 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:19.472 { 00:08:19.472 "nbd_device": "/dev/nbd0", 00:08:19.472 "bdev_name": "Nvme0n1" 00:08:19.472 }, 00:08:19.472 { 00:08:19.472 "nbd_device": "/dev/nbd1", 00:08:19.472 "bdev_name": "Nvme1n1" 00:08:19.472 }, 00:08:19.472 { 00:08:19.472 "nbd_device": "/dev/nbd2", 00:08:19.472 "bdev_name": "Nvme2n1" 00:08:19.472 }, 00:08:19.472 { 00:08:19.472 "nbd_device": "/dev/nbd3", 00:08:19.472 "bdev_name": "Nvme2n2" 00:08:19.472 }, 00:08:19.472 { 00:08:19.472 "nbd_device": "/dev/nbd4", 00:08:19.472 "bdev_name": "Nvme2n3" 00:08:19.472 }, 00:08:19.472 { 00:08:19.472 "nbd_device": "/dev/nbd5", 00:08:19.472 "bdev_name": "Nvme3n1" 00:08:19.472 } 00:08:19.472 ]' 00:08:19.472 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:19.472 08:29:41 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:19.472 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.472 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:19.472 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:19.472 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:19.472 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.472 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:19.731 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:19.731 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:19.731 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:19.731 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.731 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.731 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:19.731 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:19.731 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.731 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.731 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:19.991 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:19.991 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:19.991 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:19.991 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.991 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.991 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:19.991 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:19.991 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.991 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.991 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:20.250 08:29:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:20.250 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:20.250 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:20.250 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:20.250 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:20.250 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:20.250 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:20.250 08:29:42 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:20.250 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:20.250 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:20.510 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:20.510 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:20.510 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:20.510 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:20.510 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:20.510 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:20.510 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:20.510 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:20.510 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:20.510 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:20.770 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:20.770 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:20.770 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:20.770 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:20.770 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:20.770 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:20.770 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:20.770 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:20.770 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:20.770 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:21.030 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:21.030 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:21.030 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:21.030 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:21.030 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:21.030 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:21.030 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:21.030 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:21.030 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:21.030 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.030 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:21.030 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:21.030 08:29:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:21.030 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:21.292 08:29:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:21.554 /dev/nbd0 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:21.554 
08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:21.554 1+0 records in 00:08:21.554 1+0 records out 00:08:21.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000762927 s, 5.4 MB/s 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:21.554 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:21.813 /dev/nbd1 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:21.813 1+0 records in 00:08:21.813 1+0 records out 00:08:21.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601582 s, 6.8 MB/s 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:21.813 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:22.073 /dev/nbd10 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:22.073 1+0 records in 00:08:22.073 1+0 records out 00:08:22.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796592 s, 5.1 MB/s 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:22.073 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:22.333 /dev/nbd11 00:08:22.333 08:29:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:22.333 08:29:44 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:22.333 1+0 records in 00:08:22.333 1+0 records out 00:08:22.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508377 s, 8.1 MB/s 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:22.333 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:22.333 /dev/nbd12 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:22.592 1+0 records in 00:08:22.592 1+0 records out 00:08:22.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000723561 s, 5.7 MB/s 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:22.592 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:22.851 /dev/nbd13 
00:08:22.851 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:22.851 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:22.851 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:22.851 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:22.851 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:22.851 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:22.851 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:22.852 1+0 records in 00:08:22.852 1+0 records out 00:08:22.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707744 s, 5.8 MB/s 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.852 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:23.111 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:23.111 { 00:08:23.111 "nbd_device": "/dev/nbd0", 00:08:23.111 "bdev_name": "Nvme0n1" 00:08:23.112 }, 00:08:23.112 { 00:08:23.112 "nbd_device": "/dev/nbd1", 00:08:23.112 "bdev_name": "Nvme1n1" 00:08:23.112 }, 00:08:23.112 { 00:08:23.112 "nbd_device": "/dev/nbd10", 00:08:23.112 "bdev_name": "Nvme2n1" 00:08:23.112 }, 00:08:23.112 { 00:08:23.112 "nbd_device": "/dev/nbd11", 00:08:23.112 "bdev_name": "Nvme2n2" 00:08:23.112 }, 00:08:23.112 { 00:08:23.112 "nbd_device": "/dev/nbd12", 00:08:23.112 "bdev_name": "Nvme2n3" 00:08:23.112 }, 00:08:23.112 { 00:08:23.112 "nbd_device": "/dev/nbd13", 00:08:23.112 "bdev_name": "Nvme3n1" 00:08:23.112 } 00:08:23.112 ]' 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:23.112 { 00:08:23.112 "nbd_device": "/dev/nbd0", 00:08:23.112 "bdev_name": "Nvme0n1" 00:08:23.112 }, 00:08:23.112 { 00:08:23.112 "nbd_device": "/dev/nbd1", 00:08:23.112 "bdev_name": "Nvme1n1" 00:08:23.112 }, 00:08:23.112 { 00:08:23.112 "nbd_device": "/dev/nbd10", 00:08:23.112 "bdev_name": "Nvme2n1" 
00:08:23.112 }, 00:08:23.112 { 00:08:23.112 "nbd_device": "/dev/nbd11", 00:08:23.112 "bdev_name": "Nvme2n2" 00:08:23.112 }, 00:08:23.112 { 00:08:23.112 "nbd_device": "/dev/nbd12", 00:08:23.112 "bdev_name": "Nvme2n3" 00:08:23.112 }, 00:08:23.112 { 00:08:23.112 "nbd_device": "/dev/nbd13", 00:08:23.112 "bdev_name": "Nvme3n1" 00:08:23.112 } 00:08:23.112 ]' 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:23.112 /dev/nbd1 00:08:23.112 /dev/nbd10 00:08:23.112 /dev/nbd11 00:08:23.112 /dev/nbd12 00:08:23.112 /dev/nbd13' 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:23.112 /dev/nbd1 00:08:23.112 /dev/nbd10 00:08:23.112 /dev/nbd11 00:08:23.112 /dev/nbd12 00:08:23.112 /dev/nbd13' 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:23.112 256+0 records in 00:08:23.112 256+0 records out 00:08:23.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00392794 s, 267 MB/s 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:23.112 256+0 records in 00:08:23.112 256+0 records out 00:08:23.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0947473 s, 11.1 MB/s 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.112 08:29:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:23.371 256+0 records in 00:08:23.371 256+0 records out 00:08:23.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.100677 s, 10.4 MB/s 00:08:23.371 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.371 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:23.371 256+0 records in 00:08:23.371 256+0 records out 00:08:23.371 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.101754 s, 10.3 MB/s 00:08:23.371 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.371 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:23.631 256+0 records in 00:08:23.631 256+0 records out 00:08:23.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0974065 s, 10.8 MB/s 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:23.631 256+0 records in 00:08:23.631 256+0 records out 00:08:23.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10162 s, 10.3 MB/s 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:23.631 256+0 records in 00:08:23.631 256+0 records out 00:08:23.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.096359 s, 10.9 MB/s 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.631 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.890 08:29:45 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.890 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:24.149 08:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:24.149 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:24.149 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:24.149 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.149 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.149 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:24.149 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.149 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.149 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.149 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:24.409 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:24.409 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:24.409 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:24.409 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.409 
08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.409 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:24.409 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.409 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.409 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.409 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:24.669 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:24.669 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:24.669 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:24.669 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.669 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.669 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:24.669 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.669 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.669 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.669 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:24.929 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:24.929 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:24.929 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:24.929 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.929 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.929 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:24.929 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.929 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.929 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.929 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:25.189 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:25.189 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:25.189 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:25.189 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.189 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.189 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:25.189 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:25.189 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.189 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.189 08:29:46 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.189 08:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:25.450 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:25.710 malloc_lvol_verify 00:08:25.710 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:25.969 90878b21-090b-4bfd-9595-c6a0a2b1e425 00:08:25.969 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:25.969 f0b06893-8661-4ec0-95c0-c31675730096 00:08:26.229 08:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:26.229 /dev/nbd0 00:08:26.229 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:26.229 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:26.229 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:26.229 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:26.229 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:26.229 mke2fs 1.47.0 (5-Feb-2023) 00:08:26.229 Discarding device blocks: 0/4096 done 00:08:26.229 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:26.229 00:08:26.229 Allocating group tables: 0/1 done 00:08:26.229 Writing inode tables: 0/1 done 00:08:26.229 Creating journal (1024 blocks): done 00:08:26.229 Writing superblocks and filesystem accounting information: 0/1 done 00:08:26.229 00:08:26.229 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
00:08:26.229 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.229 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:26.229 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:26.229 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:26.229 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.229 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72812 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72812 ']' 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72812 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72812 00:08:26.489 killing process with pid 72812 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72812' 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72812 00:08:26.489 08:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72812 00:08:26.748 ************************************ 00:08:26.748 END TEST bdev_nbd 00:08:26.748 ************************************ 00:08:26.748 08:29:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:26.748 00:08:26.748 real 0m10.083s 00:08:26.748 user 0m14.211s 00:08:26.748 sys 0m4.037s 00:08:26.749 08:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.749 08:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:27.009 08:29:48 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:27.009 08:29:48 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:08:27.009 skipping fio tests on NVMe due to multi-ns failures. 00:08:27.009 08:29:48 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:27.009 08:29:48 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:27.009 08:29:48 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:27.009 08:29:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:27.009 08:29:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.009 08:29:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.009 ************************************ 00:08:27.009 START TEST bdev_verify 00:08:27.009 ************************************ 00:08:27.009 08:29:48 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:27.009 [2024-11-19 08:29:48.760746] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:27.009 [2024-11-19 08:29:48.760943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73185 ] 00:08:27.269 [2024-11-19 08:29:48.922493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:27.269 [2024-11-19 08:29:48.952731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.269 [2024-11-19 08:29:48.952916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.528 Running I/O for 5 seconds... 00:08:29.843 18624.00 IOPS, 72.75 MiB/s [2024-11-19T08:29:52.686Z] 18912.00 IOPS, 73.88 MiB/s [2024-11-19T08:29:53.624Z] 18709.33 IOPS, 73.08 MiB/s [2024-11-19T08:29:54.561Z] 19120.00 IOPS, 74.69 MiB/s [2024-11-19T08:29:54.561Z] 18700.80 IOPS, 73.05 MiB/s 00:08:32.654 Latency(us) 00:08:32.654 [2024-11-19T08:29:54.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.654 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:32.654 Verification LBA range: start 0x0 length 0xbd0bd 00:08:32.654 Nvme0n1 : 5.05 1494.84 5.84 0.00 0.00 85230.91 16713.11 77841.89 00:08:32.654 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:32.654 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:32.654 Nvme0n1 : 5.05 1572.24 6.14 0.00 0.00 81113.90 17056.53 74178.74 00:08:32.654 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:32.654 Verification LBA range: start 0x0 length 0xa0000 00:08:32.654 Nvme1n1 : 5.08 1500.51 5.86 0.00 0.00 84843.28 15453.90 76926.10 00:08:32.654 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:32.654 Verification LBA range: start 0xa0000 length 0xa0000 00:08:32.654 Nvme1n1 : 5.05 1571.73 6.14 0.00 0.00 81001.67 19689.42 70057.70 00:08:32.654 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:32.654 Verification LBA range: start 0x0 length 0x80000 00:08:32.654 Nvme2n1 : 5.08 1499.39 5.86 0.00 0.00 84728.60 16942.06 77383.99 00:08:32.654 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:32.654 Verification LBA range: start 0x80000 length 0x80000 00:08:32.654 Nvme2n1 : 5.07 1576.76 6.16 0.00 0.00 80509.13 10932.21 70973.48 00:08:32.654 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:32.654 Verification LBA range: start 0x0 length 0x80000 00:08:32.654 Nvme2n2 : 5.08 1498.91 5.86 0.00 0.00 84616.59 16026.27 76926.10 00:08:32.654 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:32.654 Verification LBA range: start 0x80000 length 0x80000 00:08:32.654 Nvme2n2 : 5.08 1576.19 6.16 0.00 0.00 80391.16 9272.34 73262.95 00:08:32.654 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:32.654 Verification LBA range: start 0x0 length 0x80000 00:08:32.654 Nvme2n3 : 5.08 1498.50 5.85 0.00 0.00 84499.01 13736.80 76010.31 00:08:32.654 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:32.654 Verification LBA range: start 0x80000 length 0x80000 00:08:32.654 Nvme2n3 : 5.09 1585.28 6.19 0.00 0.00 79970.34 6897.02 75094.53 00:08:32.654 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:32.654 Verification LBA range: start 0x0 length 0x20000 00:08:32.654 Nvme3n1 : 5.08 1498.14 5.85 0.00 0.00 84373.41 12649.31 77841.89 00:08:32.654 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:32.654 Verification LBA range: start 0x20000 length 0x20000 00:08:32.654 Nvme3n1 : 5.09 1584.82 6.19 0.00 0.00 79868.12 7240.44 75552.42 00:08:32.654 [2024-11-19T08:29:54.561Z] =================================================================================================================== 00:08:32.654 [2024-11-19T08:29:54.561Z] Total : 18457.32 72.10 0.00 0.00 82539.99 6897.02 77841.89 00:08:33.222 00:08:33.222 real 0m6.436s 00:08:33.222 user 0m12.079s 00:08:33.222 sys 0m0.244s 00:08:33.222 08:29:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.222 08:29:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:33.222 ************************************ 00:08:33.222 END TEST bdev_verify 00:08:33.222 ************************************ 00:08:33.481 08:29:55 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:33.481 08:29:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:33.481 08:29:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.481 08:29:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.481 ************************************ 00:08:33.481 START TEST bdev_verify_big_io 00:08:33.481 ************************************ 00:08:33.481 08:29:55 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:33.481 [2024-11-19 08:29:55.255883] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:33.481 [2024-11-19 08:29:55.256029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73272 ] 00:08:33.740 [2024-11-19 08:29:55.413530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:33.740 [2024-11-19 08:29:55.445513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.740 [2024-11-19 08:29:55.445620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.998 Running I/O for 5 seconds... 00:08:39.213 1911.00 IOPS, 119.44 MiB/s [2024-11-19T08:30:01.687Z] 3260.00 IOPS, 203.75 MiB/s [2024-11-19T08:30:02.255Z] 3713.33 IOPS, 232.08 MiB/s 00:08:40.348 Latency(us) 00:08:40.348 [2024-11-19T08:30:02.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.348 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:40.348 Verification LBA range: start 0x0 length 0xbd0b 00:08:40.348 Nvme0n1 : 5.48 140.15 8.76 0.00 0.00 875078.28 12420.36 1267449.07 00:08:40.348 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:40.348 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:40.348 Nvme0n1 : 5.49 185.08 11.57 0.00 0.00 683937.95 9329.58 710650.63 00:08:40.348 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:40.348 Verification LBA range: start 0x0 length 0xa000 00:08:40.348 Nvme1n1 : 5.56 149.68 9.36 0.00 0.00 786681.45 39149.89 989049.85 00:08:40.348 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:40.348 Verification LBA range: start 0xa000 length 0xa000 00:08:40.348 Nvme1n1 : 5.49 183.75 11.48 0.00 0.00 677653.86 10359.84 758271.55 00:08:40.348 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:40.348 Verification LBA range: start 0x0 length 0x8000 00:08:40.348 Nvme2n1 : 5.70 161.31 10.08 0.00 0.00 707618.52 29076.23 1186859.82 00:08:40.348 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:40.348 Verification LBA range: start 0x8000 length 0x8000 00:08:40.348 Nvme2n1 : 5.49 183.52 11.47 0.00 0.00 667561.52 9386.82 765597.85 00:08:40.348 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:40.348 Verification LBA range: start 0x0 length 0x8000 00:08:40.348 Nvme2n2 : 5.76 167.19 10.45 0.00 0.00 659482.17 26443.35 1597132.35 00:08:40.348 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:40.348 Verification LBA range: start 0x8000 length 0x8000 00:08:40.348 Nvme2n2 : 5.49 182.73 11.42 0.00 0.00 659180.61 9673.00 849850.24 00:08:40.348 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:40.348 Verification LBA range: start 0x0 length 0x8000 00:08:40.348 Nvme2n3 : 5.93 205.93 12.87 0.00 0.00 516417.14 7784.19 1611784.94 00:08:40.348 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:40.348 Verification LBA range: start 0x8000 length 0x8000 00:08:40.348 Nvme2n3 : 5.50 183.41 11.46 0.00 0.00 646156.22 10703.26 842523.95 00:08:40.348 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:40.348 Verification LBA range: start 0x0 length 0x2000 00:08:40.348 Nvme3n1 : 6.11 304.79 19.05 0.00 0.00 339874.56 665.38 1296754.25 00:08:40.348 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:08:40.348 Verification LBA range: start 0x2000 length 0x2000 00:08:40.348 Nvme3n1 : 5.50 186.25 11.64 0.00 0.00 625789.32 8699.98 791239.88 00:08:40.348 [2024-11-19T08:30:02.255Z] =================================================================================================================== 00:08:40.348 [2024-11-19T08:30:02.255Z] Total : 2233.79 139.61 0.00 0.00 623764.47 665.38 1611784.94 00:08:41.723 00:08:41.723 real 0m8.049s 00:08:41.723 user 0m15.266s 00:08:41.723 sys 0m0.274s 00:08:41.723 08:30:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.723 08:30:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:41.723 ************************************ 00:08:41.723 END TEST bdev_verify_big_io 00:08:41.723 ************************************ 00:08:41.723 08:30:03 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:41.723 08:30:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:41.723 08:30:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.723 08:30:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.723 ************************************ 00:08:41.723 START TEST bdev_write_zeroes 00:08:41.723 ************************************ 00:08:41.723 08:30:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:41.723 [2024-11-19 08:30:03.385204] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:41.723 [2024-11-19 08:30:03.385362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73381 ] 00:08:41.723 [2024-11-19 08:30:03.544531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.723 [2024-11-19 08:30:03.571388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.289 Running I/O for 1 seconds... 
00:08:43.221 61056.00 IOPS, 238.50 MiB/s 00:08:43.221 Latency(us) 00:08:43.221 [2024-11-19T08:30:05.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.221 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:43.221 Nvme0n1 : 1.02 10151.40 39.65 0.00 0.00 12573.24 10073.66 26901.24 00:08:43.221 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:43.221 Nvme1n1 : 1.02 10139.53 39.61 0.00 0.00 12570.61 10417.08 27130.19 00:08:43.221 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:43.221 Nvme2n1 : 1.02 10127.72 39.56 0.00 0.00 12527.74 10016.42 26901.24 00:08:43.221 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:43.221 Nvme2n2 : 1.03 10170.51 39.73 0.00 0.00 12421.69 6582.22 25298.61 00:08:43.221 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:43.221 Nvme2n3 : 1.03 10161.48 39.69 0.00 0.00 12401.35 6524.98 24840.72 00:08:43.221 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:43.221 Nvme3n1 : 1.03 10151.67 39.65 0.00 0.00 12380.54 6696.69 25184.14 00:08:43.221 [2024-11-19T08:30:05.128Z] =================================================================================================================== 00:08:43.221 [2024-11-19T08:30:05.128Z] Total : 60902.30 237.90 0.00 0.00 12478.96 6524.98 27130.19 00:08:43.480 00:08:43.480 real 0m1.931s 00:08:43.480 user 0m1.607s 00:08:43.480 sys 0m0.213s 00:08:43.480 08:30:05 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.480 08:30:05 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:43.480 ************************************ 00:08:43.480 END TEST bdev_write_zeroes 00:08:43.480 ************************************ 00:08:43.480 08:30:05 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:43.480 08:30:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:43.480 08:30:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.480 08:30:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.480 ************************************ 00:08:43.480 START TEST bdev_json_nonenclosed 00:08:43.480 ************************************ 00:08:43.480 08:30:05 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:43.480 [2024-11-19 08:30:05.376851] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:43.480 [2024-11-19 08:30:05.376990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73423 ] 00:08:43.740 [2024-11-19 08:30:05.530703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.740 [2024-11-19 08:30:05.558934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.740 [2024-11-19 08:30:05.559039] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:43.740 [2024-11-19 08:30:05.559057] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:43.740 [2024-11-19 08:30:05.559070] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.000 00:08:44.000 real 0m0.354s 00:08:44.000 user 0m0.145s 00:08:44.000 sys 0m0.106s 00:08:44.000 08:30:05 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.000 08:30:05 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:44.000 ************************************ 00:08:44.000 END TEST bdev_json_nonenclosed 00:08:44.000 ************************************ 00:08:44.000 08:30:05 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:44.000 08:30:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:44.000 08:30:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.000 08:30:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.000 ************************************ 00:08:44.000 START TEST bdev_json_nonarray 00:08:44.000 ************************************ 00:08:44.000 08:30:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:44.000 [2024-11-19 08:30:05.796406] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:44.000 [2024-11-19 08:30:05.796516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73443 ] 00:08:44.260 [2024-11-19 08:30:05.944990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.260 [2024-11-19 08:30:05.969936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.260 [2024-11-19 08:30:05.970034] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:44.260 [2024-11-19 08:30:05.970050] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:44.260 [2024-11-19 08:30:05.970061] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.260 00:08:44.260 real 0m0.341s 00:08:44.260 user 0m0.141s 00:08:44.260 sys 0m0.097s 00:08:44.260 08:30:06 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.260 08:30:06 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:44.260 ************************************ 00:08:44.260 END TEST bdev_json_nonarray 00:08:44.260 ************************************ 00:08:44.260 08:30:06 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:44.260 08:30:06 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:44.260 08:30:06 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:44.260 08:30:06 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:44.260 08:30:06 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:44.260 08:30:06 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:44.260 08:30:06 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:44.260 08:30:06 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:44.260 08:30:06 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:44.260 08:30:06 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:44.260 08:30:06 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:44.260 00:08:44.260 real 0m32.285s 00:08:44.260 user 0m50.132s 00:08:44.260 sys 0m6.575s 00:08:44.260 08:30:06 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.260 08:30:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.260 ************************************ 00:08:44.260 END TEST blockdev_nvme 00:08:44.260 ************************************ 00:08:44.521 08:30:06 -- spdk/autotest.sh@209 -- # uname -s 00:08:44.521 08:30:06 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:44.521 08:30:06 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:44.521 08:30:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.521 08:30:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.521 08:30:06 -- common/autotest_common.sh@10 -- # set +x 00:08:44.521 ************************************ 00:08:44.521 START TEST blockdev_nvme_gpt 00:08:44.521 ************************************ 00:08:44.521 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:44.521 * Looking for test storage... 
00:08:44.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:44.521 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:44.521 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:08:44.521 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:44.521 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.521 08:30:06 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:44.521 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.521 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:44.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.521 --rc genhtml_branch_coverage=1 00:08:44.521 --rc genhtml_function_coverage=1 00:08:44.521 --rc genhtml_legend=1 00:08:44.521 --rc geninfo_all_blocks=1 00:08:44.521 --rc geninfo_unexecuted_blocks=1 00:08:44.521 00:08:44.521 ' 00:08:44.521 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:44.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.521 --rc 
genhtml_branch_coverage=1 00:08:44.521 --rc genhtml_function_coverage=1 00:08:44.521 --rc genhtml_legend=1 00:08:44.521 --rc geninfo_all_blocks=1 00:08:44.521 --rc geninfo_unexecuted_blocks=1 00:08:44.521 00:08:44.521 ' 00:08:44.521 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:44.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.521 --rc genhtml_branch_coverage=1 00:08:44.521 --rc genhtml_function_coverage=1 00:08:44.521 --rc genhtml_legend=1 00:08:44.521 --rc geninfo_all_blocks=1 00:08:44.521 --rc geninfo_unexecuted_blocks=1 00:08:44.521 00:08:44.521 ' 00:08:44.521 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:44.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.521 --rc genhtml_branch_coverage=1 00:08:44.521 --rc genhtml_function_coverage=1 00:08:44.521 --rc genhtml_legend=1 00:08:44.521 --rc geninfo_all_blocks=1 00:08:44.521 --rc geninfo_unexecuted_blocks=1 00:08:44.521 00:08:44.521 ' 00:08:44.521 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:44.521 08:30:06 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:44.521 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:44.521 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:44.521 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:44.521 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:44.521 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:44.521 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:44.521 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:44.521 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73527 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 73527 
00:08:44.781 08:30:06 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:44.781 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 73527 ']' 00:08:44.781 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.781 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.781 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.781 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.781 08:30:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:44.781 [2024-11-19 08:30:06.525459] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:44.781 [2024-11-19 08:30:06.525595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73527 ] 00:08:44.781 [2024-11-19 08:30:06.682783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.041 [2024-11-19 08:30:06.710880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.611 08:30:07 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.611 08:30:07 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:08:45.611 08:30:07 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:45.611 08:30:07 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:45.611 08:30:07 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:46.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:46.184 Waiting for block devices as requested 00:08:46.443 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:46.443 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:46.443 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:46.702 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:52.022 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:52.022 08:30:13 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:52.022 08:30:13 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:52.022 BYT; 00:08:52.022 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:52.022 BYT; 00:08:52.022 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:52.022 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:52.022 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:52.023 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:52.023 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:52.023 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:52.023 08:30:13 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:52.023 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:52.023 08:30:13 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:52.960 The operation has completed successfully. 00:08:52.960 08:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:53.907 The operation has completed successfully. 00:08:53.907 08:30:15 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:54.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:55.413 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.413 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.413 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.413 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.413 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:55.413 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.413 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:55.413 [] 00:08:55.413 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.413 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:55.413 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:55.413 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:55.413 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:55.672 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:55.672 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.672 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:55.933 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.934 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.934 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:08:55.934 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:55.934 08:30:17 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.934 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.934 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.934 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:55.934 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:55.934 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:55.934 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.934 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:55.934 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "5744c7ae-e49b-48e4-b60b-caf2388e782a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "5744c7ae-e49b-48e4-b60b-caf2388e782a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5558fce5-daa0-4a0d-bfa8-acd4853d112f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5558fce5-daa0-4a0d-bfa8-acd4853d112f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "dac76f20-b6b5-46b8-9f02-d06f882d71c7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dac76f20-b6b5-46b8-9f02-d06f882d71c7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1a7c816a-36c6-4ef2-84d0-9a27eb68dbc0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1a7c816a-36c6-4ef2-84d0-9a27eb68dbc0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0341f9e4-983a-46ca-8fe6-aaae81be4d99"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0341f9e4-983a-46ca-8fe6-aaae81be4d99",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:55.934 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:56.195 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:56.195 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:56.195 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:56.195 08:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 73527 00:08:56.195 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 73527 ']' 00:08:56.195 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 73527 00:08:56.195 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:08:56.195 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.195 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73527 00:08:56.195 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.195 killing process with pid 73527 00:08:56.195 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.195 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73527' 00:08:56.195 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 73527 00:08:56.195 08:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 73527 00:08:56.455 08:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:56.455 08:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:56.455 08:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:56.455 08:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.455 08:30:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:56.455 ************************************ 00:08:56.455 START TEST bdev_hello_world 00:08:56.455 ************************************ 00:08:56.455 08:30:18 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:56.777 [2024-11-19 
08:30:18.380217] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:56.777 [2024-11-19 08:30:18.380398] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74141 ] 00:08:56.777 [2024-11-19 08:30:18.538600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.777 [2024-11-19 08:30:18.567400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.344 [2024-11-19 08:30:18.961340] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:57.344 [2024-11-19 08:30:18.961395] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:57.344 [2024-11-19 08:30:18.961420] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:57.344 [2024-11-19 08:30:18.963789] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:57.344 [2024-11-19 08:30:18.964238] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:57.344 [2024-11-19 08:30:18.964277] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:57.344 [2024-11-19 08:30:18.964436] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:57.344 00:08:57.344 [2024-11-19 08:30:18.964480] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:57.344 00:08:57.344 real 0m0.888s 00:08:57.344 user 0m0.576s 00:08:57.344 sys 0m0.209s 00:08:57.344 08:30:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.344 08:30:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:57.344 ************************************ 00:08:57.344 END TEST bdev_hello_world 00:08:57.344 ************************************ 00:08:57.344 08:30:19 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:57.344 08:30:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:57.344 08:30:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.344 08:30:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:57.344 ************************************ 00:08:57.344 START TEST bdev_bounds 00:08:57.344 ************************************ 00:08:57.344 08:30:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:57.603 08:30:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74172 00:08:57.603 08:30:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:57.603 08:30:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:57.603 08:30:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74172' 00:08:57.603 Process bdevio pid: 74172 00:08:57.603 08:30:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74172 00:08:57.603 08:30:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74172 ']' 00:08:57.603 08:30:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.603 08:30:19 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.603 08:30:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.603 08:30:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.603 08:30:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:57.603 [2024-11-19 08:30:19.334919] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:57.603 [2024-11-19 08:30:19.335461] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74172 ] 00:08:57.603 [2024-11-19 08:30:19.477505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:57.862 [2024-11-19 08:30:19.510875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.862 [2024-11-19 08:30:19.510934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.862 [2024-11-19 08:30:19.510994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.430 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.430 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:58.430 08:30:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:58.691 I/O targets: 00:08:58.691 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:58.691 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:58.691 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:58.691 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:58.691 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:58.691 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:58.691 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:58.691 00:08:58.691 00:08:58.691 CUnit - A unit testing framework for C - Version 2.1-3 00:08:58.691 http://cunit.sourceforge.net/ 00:08:58.691 00:08:58.691 00:08:58.691 Suite: bdevio tests on: Nvme3n1 00:08:58.691 Test: blockdev write read block ...passed 00:08:58.691 Test: blockdev write zeroes read block ...passed 00:08:58.691 Test: blockdev write zeroes read no split ...passed 00:08:58.691 Test: blockdev write zeroes read split ...passed 00:08:58.691 Test: blockdev write zeroes read split partial ...passed 00:08:58.691 Test: blockdev reset ...[2024-11-19 08:30:20.430118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:58.691 [2024-11-19 08:30:20.432197] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:08:58.691 passed 00:08:58.691 Test: blockdev write read 8 blocks ...passed 00:08:58.691 Test: blockdev write read size > 128k ...passed 00:08:58.691 Test: blockdev write read invalid size ...passed 00:08:58.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:58.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:58.691 Test: blockdev write read max offset ...passed 00:08:58.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:58.691 Test: blockdev writev readv 8 blocks ...passed 00:08:58.691 Test: blockdev writev readv 30 x 1block ...passed 00:08:58.691 Test: blockdev writev readv block ...passed 00:08:58.691 Test: blockdev writev readv size > 128k ...passed 00:08:58.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:58.691 Test: blockdev comparev and writev ...[2024-11-19 08:30:20.437958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c000a000 len:0x1000 00:08:58.691 [2024-11-19 08:30:20.438016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:58.691 passed 00:08:58.691 Test: blockdev nvme passthru rw ...passed 00:08:58.691 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:30:20.438735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:58.691 [2024-11-19 08:30:20.438787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:58.691 passed 00:08:58.691 Test: blockdev nvme admin passthru ...passed 00:08:58.691 Test: blockdev copy ...passed 00:08:58.691 Suite: bdevio tests on: Nvme2n3 00:08:58.691 Test: blockdev write read block ...passed 00:08:58.691 Test: blockdev write zeroes read block ...passed 00:08:58.691 Test: blockdev write zeroes read no split ...passed 00:08:58.691 Test: blockdev write zeroes read split ...passed 00:08:58.691 Test: blockdev write zeroes read split partial ...passed 00:08:58.691 Test: blockdev reset ...[2024-11-19 08:30:20.455645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:58.691 [2024-11-19 08:30:20.458022] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:58.691 passed 00:08:58.691 Test: blockdev write read 8 blocks ...passed 00:08:58.691 Test: blockdev write read size > 128k ...passed 00:08:58.691 Test: blockdev write read invalid size ...passed 00:08:58.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:58.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:58.691 Test: blockdev write read max offset ...passed 00:08:58.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:58.691 Test: blockdev writev readv 8 blocks ...passed 00:08:58.691 Test: blockdev writev readv 30 x 1block ...passed 00:08:58.691 Test: blockdev writev readv block ...passed 00:08:58.691 Test: blockdev writev readv size > 128k ...passed 00:08:58.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:58.691 Test: blockdev comparev and writev ...[2024-11-19 08:30:20.464024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5004000 len:0x1000 00:08:58.691 [2024-11-19 08:30:20.464085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:58.691 passed 00:08:58.691 Test: blockdev nvme passthru rw ...passed 00:08:58.691 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:30:20.464812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:58.691 [2024-11-19 08:30:20.464863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:58.691 passed 00:08:58.691 Test: blockdev nvme admin passthru ...passed 00:08:58.691 Test: blockdev copy ...passed 00:08:58.691 Suite: bdevio tests on: Nvme2n2 00:08:58.691 Test: blockdev write read block ...passed 00:08:58.691 Test: blockdev write zeroes read block ...passed 00:08:58.691 Test: blockdev write zeroes read no split ...passed 00:08:58.691 Test: blockdev write zeroes read split ...passed 00:08:58.691 Test: blockdev write zeroes read split partial ...passed 00:08:58.691 Test: blockdev reset ...[2024-11-19 08:30:20.484095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:58.691 [2024-11-19 08:30:20.486460] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:58.691 passed 00:08:58.691 Test: blockdev write read 8 blocks ...passed 00:08:58.691 Test: blockdev write read size > 128k ...passed 00:08:58.691 Test: blockdev write read invalid size ...passed 00:08:58.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:58.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:58.691 Test: blockdev write read max offset ...passed 00:08:58.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:58.691 Test: blockdev writev readv 8 blocks ...passed 00:08:58.691 Test: blockdev writev readv 30 x 1block ...passed 00:08:58.691 Test: blockdev writev readv block ...passed 00:08:58.691 Test: blockdev writev readv size > 128k ...passed 00:08:58.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:58.691 Test: blockdev comparev and writev ...[2024-11-19 08:30:20.492699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5004000 len:0x1000 00:08:58.691 [2024-11-19 08:30:20.492772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:58.691 passed 00:08:58.691 Test: blockdev nvme passthru rw ...passed 00:08:58.691 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:30:20.493458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:58.691 [2024-11-19 08:30:20.493518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:58.691 passed 00:08:58.691 Test: blockdev nvme admin passthru ...passed 00:08:58.691 Test: blockdev copy ...passed 00:08:58.691 Suite: bdevio tests on: Nvme2n1 00:08:58.691 Test: blockdev write read block ...passed 00:08:58.691 Test: blockdev write zeroes read block ...passed 00:08:58.691 Test: blockdev write zeroes read no split ...passed 00:08:58.691 Test: blockdev write zeroes read split ...passed 00:08:58.691 Test: blockdev write zeroes read split partial ...passed 00:08:58.691 Test: blockdev reset ...[2024-11-19 08:30:20.515410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:58.691 [2024-11-19 08:30:20.517618] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:58.691 passed 00:08:58.691 Test: blockdev write read 8 blocks ...passed 00:08:58.691 Test: blockdev write read size > 128k ...passed 00:08:58.691 Test: blockdev write read invalid size ...passed 00:08:58.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:58.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:58.691 Test: blockdev write read max offset ...passed 00:08:58.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:58.692 Test: blockdev writev readv 8 blocks ...passed 00:08:58.692 Test: blockdev writev readv 30 x 1block ...passed 00:08:58.692 Test: blockdev writev readv block ...passed 00:08:58.692 Test: blockdev writev readv size > 128k ...passed 00:08:58.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:58.692 Test: blockdev comparev and writev ...[2024-11-19 08:30:20.524100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5006000 len:0x1000 00:08:58.692 [2024-11-19 08:30:20.524161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:58.692 passed 00:08:58.692 Test: blockdev nvme passthru rw ...passed 00:08:58.692 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:30:20.524957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:58.692 [2024-11-19 08:30:20.525018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:58.692 passed 00:08:58.692 Test: blockdev nvme admin passthru ...passed 00:08:58.692 Test: blockdev copy ...passed 00:08:58.692 Suite: bdevio tests on: Nvme1n1p2 00:08:58.692 Test: blockdev write read block ...passed 00:08:58.692 Test: blockdev write zeroes read block ...passed 00:08:58.692 Test: blockdev write zeroes read no split ...passed 00:08:58.692 Test: blockdev write zeroes read split ...passed 00:08:58.692 Test: blockdev write zeroes read split partial ...passed 00:08:58.692 Test: blockdev reset ...[2024-11-19 08:30:20.549092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:58.692 [2024-11-19 08:30:20.551217] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:58.692 passed 00:08:58.692 Test: blockdev write read 8 blocks ...passed 00:08:58.692 Test: blockdev write read size > 128k ...passed 00:08:58.692 Test: blockdev write read invalid size ...passed 00:08:58.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:58.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:58.692 Test: blockdev write read max offset ...passed 00:08:58.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:58.692 Test: blockdev writev readv 8 blocks ...passed 00:08:58.692 Test: blockdev writev readv 30 x 1block ...passed 00:08:58.692 Test: blockdev writev readv block ...passed 00:08:58.692 Test: blockdev writev readv size > 128k ...passed 00:08:58.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:58.692 Test: blockdev comparev and writev ...[2024-11-19 08:30:20.558024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2b5002000 len:0x1000 00:08:58.692 [2024-11-19 08:30:20.558074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:58.692 passed 00:08:58.692 Test: blockdev nvme passthru rw ...passed 00:08:58.692 Test: blockdev nvme passthru vendor specific ...passed 00:08:58.692 Test: blockdev nvme admin passthru ...passed 00:08:58.692 Test: blockdev copy ...passed 00:08:58.692 Suite: bdevio tests on: Nvme1n1p1 00:08:58.692 Test: blockdev write read block ...passed 00:08:58.692 Test: blockdev write zeroes read block ...passed 00:08:58.692 Test: blockdev write zeroes read no split ...passed 00:08:58.692 Test: blockdev write zeroes read split ...passed 00:08:58.692 Test: blockdev write zeroes read split partial ...passed 00:08:58.692 Test: blockdev reset ...[2024-11-19 08:30:20.580569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:58.692 [2024-11-19 08:30:20.582594] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:58.692 passed 00:08:58.692 Test: blockdev write read 8 blocks ...passed 00:08:58.692 Test: blockdev write read size > 128k ...passed 00:08:58.692 Test: blockdev write read invalid size ...passed 00:08:58.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:58.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:58.692 Test: blockdev write read max offset ...passed 00:08:58.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:58.692 Test: blockdev writev readv 8 blocks ...passed 00:08:58.692 Test: blockdev writev readv 30 x 1block ...passed 00:08:58.692 Test: blockdev writev readv block ...passed 00:08:58.692 Test: blockdev writev readv size > 128k ...passed 00:08:58.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:58.692 Test: blockdev comparev and writev ...[2024-11-19 08:30:20.589276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2caa3b000 len:0x1000 00:08:58.692 [2024-11-19 08:30:20.589329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:58.692 passed 00:08:58.692 Test: blockdev nvme passthru rw ...passed 00:08:58.692 Test: blockdev nvme passthru vendor specific ...passed 00:08:58.692 Test: blockdev nvme admin passthru ...passed 00:08:58.692 Test: blockdev copy ...passed 00:08:58.692 Suite: bdevio tests on: Nvme0n1 00:08:58.692 Test: blockdev write read block ...passed 00:08:58.692 Test: blockdev write zeroes read block ...passed 00:08:58.953 Test: blockdev write zeroes read no split ...passed 00:08:58.953 Test: blockdev write zeroes read split ...passed 00:08:58.953 Test: blockdev write zeroes read split partial ...passed 00:08:58.953 Test: blockdev reset ...[2024-11-19 08:30:20.611234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:58.953 [2024-11-19 08:30:20.613252] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:58.953 passed 00:08:58.953 Test: blockdev write read 8 blocks ...passed 00:08:58.953 Test: blockdev write read size > 128k ...passed 00:08:58.953 Test: blockdev write read invalid size ...passed 00:08:58.953 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:58.953 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:58.953 Test: blockdev write read max offset ...passed 00:08:58.953 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:58.953 Test: blockdev writev readv 8 blocks ...passed 00:08:58.953 Test: blockdev writev readv 30 x 1block ...passed 00:08:58.953 Test: blockdev writev readv block ...passed 00:08:58.953 Test: blockdev writev readv size > 128k ...passed 00:08:58.953 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:58.953 Test: blockdev comparev and writev ...[2024-11-19 08:30:20.618614] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:58.953 separate metadata which is not supported yet. 
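Annotation: the COMPARE FAILURE (02/85) completions logged as *NOTICE* inside each "blockdev comparev and writev" subtest above are produced while the subtest exercises the miscompare path and do not indicate a failure (each subtest is reported as passed); on Nvme0n1 the subtest is skipped because, as the error message states, that namespace carries separate metadata, which bdevio does not support yet. Outside this run, one way to inspect a bdev's descriptor before deciding whether such tests apply is the bdev_get_bdevs RPC over the same socket conventions used here (illustrative invocation only, not part of this log; the jq filter is an assumption):

  # Hypothetical check, not part of this run: dump the Nvme0n1 bdev descriptor
  # served by the running SPDK app and review its metadata-related fields.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0]'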
00:08:58.953 passed 00:08:58.953 Test: blockdev nvme passthru rw ...passed 00:08:58.953 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:30:20.619221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:58.953 [2024-11-19 08:30:20.619280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:58.953 passed 00:08:58.953 Test: blockdev nvme admin passthru ...passed 00:08:58.953 Test: blockdev copy ...passed 00:08:58.953 00:08:58.953 Run Summary: Type Total Ran Passed Failed Inactive 00:08:58.953 suites 7 7 n/a 0 0 00:08:58.953 tests 161 161 161 0 0 00:08:58.953 asserts 1025 1025 1025 0 n/a 00:08:58.953 00:08:58.953 Elapsed time = 0.501 seconds 00:08:58.953 0 00:08:58.953 08:30:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74172 00:08:58.953 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74172 ']' 00:08:58.953 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74172 00:08:58.953 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:58.953 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.953 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74172 00:08:58.953 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.953 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.953 killing process with pid 74172 00:08:58.953 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74172' 00:08:58.953 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74172 00:08:58.953 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74172 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:59.212 00:08:59.212 real 0m1.637s 00:08:59.212 user 0m4.292s 00:08:59.212 sys 0m0.367s 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:59.212 ************************************ 00:08:59.212 END TEST bdev_bounds 00:08:59.212 ************************************ 00:08:59.212 08:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:59.212 08:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:59.212 08:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.212 08:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.212 ************************************ 00:08:59.212 START TEST bdev_nbd 00:08:59.212 ************************************ 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:59.212 08:30:20 
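Annotation: the bdev_bounds stage ends by tearing down the bdevio app (pid 74172) through the killprocess helper whose individual commands are traced above. A condensed reconstruction of that helper from the trace (simplified sketch, not the verbatim autotest_common.sh source):

  # Approximate shape of killprocess as seen in the trace above
  # (simplified reconstruction, not verbatim source).
  killprocess() {
          local pid=$1
          [ -z "$pid" ] && return 1          # refuse an empty pid argument
          kill -0 "$pid" || return 1         # the process must still exist
          if [ "$(uname)" = Linux ]; then
                  process_name=$(ps --no-headers -o comm= "$pid")
          fi
          # the real helper special-cases process_name == sudo; omitted here
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid"                        # reap it before the test continues
  }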
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74221 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74221 /var/tmp/spdk-nbd.sock 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74221 ']' 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:59.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.212 08:30:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:59.212 [2024-11-19 08:30:21.047577] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:59.212 [2024-11-19 08:30:21.047751] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.490 [2024-11-19 08:30:21.193279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.490 [2024-11-19 08:30:21.224016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:00.429 08:30:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.429 1+0 records in 00:09:00.429 1+0 records out 00:09:00.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045491 s, 9.0 MB/s 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:00.429 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.689 1+0 records in 00:09:00.689 1+0 records out 00:09:00.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683228 s, 6.0 MB/s 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:00.689 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.949 1+0 records in 00:09:00.949 1+0 records out 00:09:00.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602281 s, 6.8 MB/s 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:00.949 08:30:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:01.208 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:01.208 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:01.208 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:01.208 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:09:01.208 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:01.208 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.208 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.208 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:09:01.208 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:01.208 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.208 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.468 1+0 records in 00:09:01.468 1+0 records out 00:09:01.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645481 s, 6.3 MB/s 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.468 1+0 records in 00:09:01.468 1+0 records out 00:09:01.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461999 s, 8.9 MB/s 00:09:01.468 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.727 1+0 records in 00:09:01.727 1+0 records out 00:09:01.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005655 s, 7.2 MB/s 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:01.727 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.986 1+0 records in 00:09:01.986 1+0 records out 00:09:01.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681431 s, 6.0 MB/s 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:01.986 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:02.245 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:02.245 08:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:02.245 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd0", 00:09:02.245 "bdev_name": "Nvme0n1" 00:09:02.245 }, 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd1", 00:09:02.245 "bdev_name": "Nvme1n1p1" 00:09:02.245 }, 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd2", 00:09:02.245 "bdev_name": "Nvme1n1p2" 00:09:02.245 }, 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd3", 00:09:02.245 "bdev_name": "Nvme2n1" 00:09:02.245 }, 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd4", 00:09:02.245 "bdev_name": "Nvme2n2" 00:09:02.245 }, 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd5", 00:09:02.245 "bdev_name": "Nvme2n3" 00:09:02.245 }, 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd6", 00:09:02.245 "bdev_name": "Nvme3n1" 00:09:02.245 } 00:09:02.245 ]' 00:09:02.245 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:02.245 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd0", 00:09:02.245 "bdev_name": "Nvme0n1" 00:09:02.245 }, 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd1", 00:09:02.245 "bdev_name": "Nvme1n1p1" 00:09:02.245 }, 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd2", 00:09:02.245 "bdev_name": "Nvme1n1p2" 00:09:02.245 }, 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd3", 00:09:02.245 "bdev_name": "Nvme2n1" 00:09:02.245 }, 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd4", 00:09:02.245 "bdev_name": "Nvme2n2" 00:09:02.245 }, 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd5", 00:09:02.245 "bdev_name": "Nvme2n3" 00:09:02.245 }, 00:09:02.245 { 00:09:02.245 "nbd_device": "/dev/nbd6", 00:09:02.245 "bdev_name": "Nvme3n1" 00:09:02.245 } 00:09:02.245 ]' 00:09:02.245 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.504 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:02.763 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:02.763 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:02.763 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:02.763 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.763 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.763 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:02.763 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:02.763 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.763 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.763 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:03.022 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:03.022 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:03.022 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:03.022 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.022 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.022 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:03.022 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:03.022 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.022 08:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.022 08:30:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:03.281 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:03.281 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:03.281 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:03.281 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.281 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.281 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:03.281 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:03.281 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.281 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.281 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:03.540 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:03.540 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:03.540 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:03.540 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.540 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.540 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:03.540 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:03.540 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.540 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.541 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:03.799 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:03.799 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:03.799 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:03.799 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.799 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.799 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:03.799 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:03.799 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.799 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.799 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:04.059 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:04.059 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:04.059 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
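Annotation: the start/stop verification above leans on two small polling helpers, waitfornbd and waitfornbd_exit, whose traces repeat once per /dev/nbdX device. A condensed reconstruction from those traces (assumed shape only, not the verbatim helper source; the scratch-file path and sleep interval are assumptions):

  # Reconstructed polling helpers, based on the repeated traces above.
  waitfornbd() {
          local nbd_name=$1 i size
          for ((i = 1; i <= 20; i++)); do
                  grep -q -w "$nbd_name" /proc/partitions && break
                  sleep 0.1
          done
          # prove the device is usable: one 4 KiB direct read must succeed
          for ((i = 1; i <= 20; i++)); do
                  if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                          size=$(stat -c %s /tmp/nbdtest)
                          rm -f /tmp/nbdtest
                          [ "$size" != 0 ] && return 0
                  fi
                  sleep 0.1
          done
          return 1
  }

  waitfornbd_exit() {
          local nbd_name=$1 i
          for ((i = 1; i <= 20; i++)); do
                  # done as soon as the device disappears from /proc/partitions
                  grep -q -w "$nbd_name" /proc/partitions || break
                  sleep 0.1
          done
          return 0
  }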
00:09:04.059 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.059 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.059 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:04.059 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.059 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.059 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:04.059 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.059 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:04.319 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:04.319 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:04.319 08:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:04.319 08:30:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:04.319 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:04.578 /dev/nbd0 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:04.578 1+0 records in 00:09:04.578 1+0 records out 00:09:04.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617705 s, 6.6 MB/s 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:04.578 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:04.838 /dev/nbd1 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:04.838 08:30:26 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:04.838 1+0 records in 00:09:04.838 1+0 records out 00:09:04.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555131 s, 7.4 MB/s 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:04.838 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:05.097 /dev/nbd10 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.097 1+0 records in 00:09:05.097 1+0 records out 00:09:05.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631735 s, 6.5 MB/s 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:05.097 08:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:05.357 /dev/nbd11 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.357 1+0 records in 00:09:05.357 1+0 records out 00:09:05.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687976 s, 6.0 MB/s 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:05.357 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:05.616 /dev/nbd12 00:09:05.616 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:05.616 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:05.616 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:09:05.616 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:05.616 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
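Annotation: both verification passes (start/stop and data verify) drive the same NBD RPCs against the /var/tmp/spdk-nbd.sock socket; the second pass simply pins each bdev to an explicit /dev/nbdX device. The basic round-trip they exercise, mirroring the rpc.py calls visible in the trace (illustrative manual invocation, not part of this run; the RPC shell variable is an assumption):

  # Manual NBD round-trip using the same RPCs and jq filter seen in the trace.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  $RPC nbd_start_disk Nvme0n1 /dev/nbd0           # export a bdev as /dev/nbd0
  $RPC nbd_get_disks | jq -r '.[] | .nbd_device'  # list the active exports
  $RPC nbd_stop_disk /dev/nbd0                    # tear the export down again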
00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.617 1+0 records in 00:09:05.617 1+0 records out 00:09:05.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768979 s, 5.3 MB/s 00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:05.617 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:05.876 /dev/nbd13 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.876 1+0 records in 00:09:05.876 1+0 records out 00:09:05.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00073411 s, 5.6 MB/s 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:05.876 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:06.135 /dev/nbd14 00:09:06.135 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:06.135 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:06.135 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:09:06.135 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:06.135 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:06.135 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:06.135 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:09:06.135 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:06.135 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:06.135 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:06.135 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:06.136 1+0 records in 00:09:06.136 1+0 records out 00:09:06.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649751 s, 6.3 MB/s 00:09:06.136 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.136 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:06.136 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.136 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:06.136 08:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:06.136 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:06.136 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:06.136 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:06.136 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.136 08:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd0", 00:09:06.395 "bdev_name": "Nvme0n1" 00:09:06.395 }, 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd1", 00:09:06.395 "bdev_name": "Nvme1n1p1" 00:09:06.395 }, 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd10", 00:09:06.395 "bdev_name": "Nvme1n1p2" 00:09:06.395 }, 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd11", 00:09:06.395 "bdev_name": "Nvme2n1" 00:09:06.395 }, 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd12", 00:09:06.395 "bdev_name": "Nvme2n2" 00:09:06.395 }, 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd13", 00:09:06.395 "bdev_name": "Nvme2n3" 
00:09:06.395 }, 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd14", 00:09:06.395 "bdev_name": "Nvme3n1" 00:09:06.395 } 00:09:06.395 ]' 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd0", 00:09:06.395 "bdev_name": "Nvme0n1" 00:09:06.395 }, 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd1", 00:09:06.395 "bdev_name": "Nvme1n1p1" 00:09:06.395 }, 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd10", 00:09:06.395 "bdev_name": "Nvme1n1p2" 00:09:06.395 }, 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd11", 00:09:06.395 "bdev_name": "Nvme2n1" 00:09:06.395 }, 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd12", 00:09:06.395 "bdev_name": "Nvme2n2" 00:09:06.395 }, 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd13", 00:09:06.395 "bdev_name": "Nvme2n3" 00:09:06.395 }, 00:09:06.395 { 00:09:06.395 "nbd_device": "/dev/nbd14", 00:09:06.395 "bdev_name": "Nvme3n1" 00:09:06.395 } 00:09:06.395 ]' 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:06.395 /dev/nbd1 00:09:06.395 /dev/nbd10 00:09:06.395 /dev/nbd11 00:09:06.395 /dev/nbd12 00:09:06.395 /dev/nbd13 00:09:06.395 /dev/nbd14' 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:06.395 /dev/nbd1 00:09:06.395 /dev/nbd10 00:09:06.395 /dev/nbd11 00:09:06.395 /dev/nbd12 00:09:06.395 /dev/nbd13 00:09:06.395 /dev/nbd14' 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:06.395 256+0 records in 00:09:06.395 256+0 records out 00:09:06.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136481 s, 76.8 MB/s 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.395 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:06.654 256+0 records in 00:09:06.654 256+0 records out 00:09:06.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.10832 s, 9.7 MB/s 00:09:06.654 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.654 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:06.654 256+0 records in 00:09:06.654 256+0 records out 00:09:06.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113192 s, 9.3 MB/s 00:09:06.654 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.654 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:06.654 256+0 records in 00:09:06.654 256+0 records out 00:09:06.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108978 s, 9.6 MB/s 00:09:06.654 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.654 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:06.913 256+0 records in 00:09:06.913 256+0 records out 00:09:06.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.109033 s, 9.6 MB/s 00:09:06.913 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.913 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:06.913 256+0 records in 00:09:06.913 256+0 records out 00:09:06.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.111936 s, 9.4 MB/s 00:09:06.913 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.913 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:07.173 256+0 records in 00:09:07.173 256+0 records out 00:09:07.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.111567 s, 9.4 MB/s 00:09:07.173 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:07.173 08:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:07.173 256+0 records in 00:09:07.173 256+0 records out 00:09:07.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.11032 s, 9.5 MB/s 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:07.173 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.432 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:07.692 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:07.692 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:07.692 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:07.692 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.692 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.692 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:07.692 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:07.692 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.692 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.692 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:07.950 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:07.950 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:07.951 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:07.951 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.951 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.951 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:07.951 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:07.951 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.951 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.951 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:08.210 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:08.210 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:08.210 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:08.210 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:08.210 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:08.210 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:08.210 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:08.210 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:08.210 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:08.210 08:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:08.468 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:09:08.468 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:08.468 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:08.468 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:08.468 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:08.468 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:08.468 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:08.468 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:08.468 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:08.468 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:08.727 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:08.727 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:08.727 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:08.727 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:08.727 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:08.727 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:08.727 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:08.727 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:08.727 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:08.727 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:08.987 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:08.987 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:08.987 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:08.987 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:08.987 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:08.987 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:08.987 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:08.987 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:08.987 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:08.987 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.987 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:09.247 08:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:09.570 malloc_lvol_verify 00:09:09.570 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:09.570 854839ad-41af-43cd-8042-8c1b2f07074a 00:09:09.570 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:09.829 d2a93426-ca11-48a7-b1dd-927cdab534d6 00:09:09.830 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:10.089 /dev/nbd0 00:09:10.089 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:10.089 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:10.089 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:10.089 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:10.089 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:10.089 mke2fs 1.47.0 (5-Feb-2023) 00:09:10.089 Discarding device blocks: 0/4096 done 00:09:10.089 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:10.089 00:09:10.089 Allocating group tables: 0/1 done 00:09:10.089 Writing inode tables: 0/1 done 00:09:10.089 Creating journal (1024 blocks): done 00:09:10.089 Writing superblocks and filesystem accounting information: 0/1 done 00:09:10.089 00:09:10.089 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:10.089 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.089 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:10.089 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:10.089 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:10.089 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:09:10.089 08:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74221 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74221 ']' 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74221 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74221 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.349 killing process with pid 74221 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74221' 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74221 00:09:10.349 08:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74221 00:09:10.610 08:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:10.610 00:09:10.610 real 0m11.375s 00:09:10.610 user 0m15.948s 00:09:10.610 sys 0m4.602s 00:09:10.610 08:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.610 ************************************ 00:09:10.610 END TEST bdev_nbd 00:09:10.610 08:30:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:10.610 ************************************ 00:09:10.610 08:30:32 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:10.610 08:30:32 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:09:10.610 08:30:32 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:09:10.610 skipping fio tests on NVMe due to multi-ns failures. 00:09:10.610 08:30:32 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:09:10.610 08:30:32 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:10.610 08:30:32 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:10.610 08:30:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:10.610 08:30:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.610 08:30:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:10.610 ************************************ 00:09:10.610 START TEST bdev_verify 00:09:10.610 ************************************ 00:09:10.610 08:30:32 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:10.610 [2024-11-19 08:30:32.468143] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:10.610 [2024-11-19 08:30:32.468277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74638 ] 00:09:10.869 [2024-11-19 08:30:32.624692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:10.869 [2024-11-19 08:30:32.655120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.869 [2024-11-19 08:30:32.655219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.438 Running I/O for 5 seconds... 
00:09:13.753 19456.00 IOPS, 76.00 MiB/s [2024-11-19T08:30:36.597Z] 20032.00 IOPS, 78.25 MiB/s [2024-11-19T08:30:37.536Z] 19925.33 IOPS, 77.83 MiB/s [2024-11-19T08:30:38.474Z] 19808.00 IOPS, 77.38 MiB/s [2024-11-19T08:30:38.474Z] 19865.60 IOPS, 77.60 MiB/s 00:09:16.567 Latency(us) 00:09:16.567 [2024-11-19T08:30:38.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.567 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.567 Verification LBA range: start 0x0 length 0xbd0bd 00:09:16.567 Nvme0n1 : 5.07 1401.13 5.47 0.00 0.00 90913.10 12248.65 83794.50 00:09:16.567 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.567 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:16.568 Nvme0n1 : 5.08 1385.34 5.41 0.00 0.00 91378.30 18544.68 80589.25 00:09:16.568 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.568 Verification LBA range: start 0x0 length 0x4ff80 00:09:16.568 Nvme1n1p1 : 5.07 1400.56 5.47 0.00 0.00 90841.61 11790.76 82878.71 00:09:16.568 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.568 Verification LBA range: start 0x4ff80 length 0x4ff80 00:09:16.568 Nvme1n1p1 : 5.08 1384.89 5.41 0.00 0.00 91254.14 13164.44 83336.61 00:09:16.568 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.568 Verification LBA range: start 0x0 length 0x4ff7f 00:09:16.568 Nvme1n1p2 : 5.09 1408.11 5.50 0.00 0.00 90417.60 13336.15 82420.82 00:09:16.568 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.568 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:09:16.568 Nvme1n1p2 : 5.07 1387.96 5.42 0.00 0.00 92038.28 20032.84 84710.29 00:09:16.568 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.568 Verification LBA range: start 0x0 length 0x80000 00:09:16.568 Nvme2n1 : 5.09 1407.72 5.50 0.00 0.00 90266.84 13221.67 79673.46 00:09:16.568 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.568 Verification LBA range: start 0x80000 length 0x80000 00:09:16.568 Nvme2n1 : 5.08 1387.12 5.42 0.00 0.00 91913.17 20834.15 81962.93 00:09:16.568 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.568 Verification LBA range: start 0x0 length 0x80000 00:09:16.568 Nvme2n2 : 5.09 1407.36 5.50 0.00 0.00 90110.70 13221.67 78757.67 00:09:16.568 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.568 Verification LBA range: start 0x80000 length 0x80000 00:09:16.568 Nvme2n2 : 5.08 1386.78 5.42 0.00 0.00 91783.47 20147.31 79673.46 00:09:16.568 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.568 Verification LBA range: start 0x0 length 0x80000 00:09:16.568 Nvme2n3 : 5.09 1407.01 5.50 0.00 0.00 89955.47 12992.73 81047.14 00:09:16.568 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.568 Verification LBA range: start 0x80000 length 0x80000 00:09:16.568 Nvme2n3 : 5.08 1386.32 5.42 0.00 0.00 91663.28 19460.47 80131.35 00:09:16.568 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.568 Verification LBA range: start 0x0 length 0x20000 00:09:16.568 Nvme3n1 : 5.10 1406.66 5.49 0.00 0.00 89815.64 12763.78 84252.39 00:09:16.568 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.568 Verification LBA range: start 0x20000 length 0x20000 00:09:16.568 
Nvme3n1 : 5.08 1385.83 5.41 0.00 0.00 91513.51 18773.63 79215.57 00:09:16.568 [2024-11-19T08:30:38.475Z] =================================================================================================================== 00:09:16.568 [2024-11-19T08:30:38.475Z] Total : 19542.80 76.34 0.00 0.00 90984.57 11790.76 84710.29 00:09:17.171 00:09:17.171 real 0m6.441s 00:09:17.171 user 0m12.088s 00:09:17.171 sys 0m0.253s 00:09:17.171 08:30:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.171 08:30:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:17.171 ************************************ 00:09:17.171 END TEST bdev_verify 00:09:17.171 ************************************ 00:09:17.171 08:30:38 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:17.171 08:30:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:17.171 08:30:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.171 08:30:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:17.171 ************************************ 00:09:17.171 START TEST bdev_verify_big_io 00:09:17.171 ************************************ 00:09:17.171 08:30:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:17.171 [2024-11-19 08:30:38.951788] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:17.171 [2024-11-19 08:30:38.951926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74726 ] 00:09:17.456 [2024-11-19 08:30:39.106114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:17.456 [2024-11-19 08:30:39.136898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.456 [2024-11-19 08:30:39.137015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.720 Running I/O for 5 seconds... 
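
Both verify stages, the 4 KiB pass whose per-bdev latency table appears above and the 64 KiB big-I/O pass that starts here, reuse the same bdevperf harness against the generated bdev.json; only the I/O size passed with -o changes. Equivalent manual invocations, assuming the repository layout shown in the trace:

  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096  -w verify -t 5 -C -m 0x3   # bdev_verify
  ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io

-m 0x3 matches the two reactor cores reported above; the remaining flags are copied verbatim from the traced command lines.
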
00:09:22.484 2598.00 IOPS, 162.38 MiB/s [2024-11-19T08:30:45.770Z] 3075.00 IOPS, 192.19 MiB/s [2024-11-19T08:30:45.770Z] 3036.00 IOPS, 189.75 MiB/s 00:09:23.863 Latency(us) 00:09:23.863 [2024-11-19T08:30:45.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.863 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0x0 length 0xbd0b 00:09:23.863 Nvme0n1 : 5.66 118.82 7.43 0.00 0.00 1039040.47 46705.13 1296754.25 00:09:23.863 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:23.863 Nvme0n1 : 5.72 134.15 8.38 0.00 0.00 911787.04 30220.97 959744.67 00:09:23.863 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0x0 length 0x4ff8 00:09:23.863 Nvme1n1p1 : 5.73 115.47 7.22 0.00 0.00 1044692.55 74636.63 1992752.29 00:09:23.863 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:23.863 Nvme1n1p1 : 5.73 136.88 8.56 0.00 0.00 885236.24 86541.86 824208.21 00:09:23.863 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0x0 length 0x4ff7 00:09:23.863 Nvme1n1p2 : 5.73 134.63 8.41 0.00 0.00 878328.90 74178.74 871829.13 00:09:23.863 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:23.863 Nvme1n1p2 : 5.73 130.72 8.17 0.00 0.00 905000.73 87457.65 1531195.70 00:09:23.863 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0x0 length 0x8000 00:09:23.863 Nvme2n1 : 5.66 135.66 8.48 0.00 0.00 860473.01 71889.27 890144.87 00:09:23.863 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0x8000 length 0x8000 00:09:23.863 Nvme2n1 : 5.82 135.91 8.49 0.00 0.00 851209.89 53344.59 1553174.58 00:09:23.863 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0x0 length 0x8000 00:09:23.863 Nvme2n2 : 5.74 139.10 8.69 0.00 0.00 817597.73 70973.48 904797.46 00:09:23.863 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0x8000 length 0x8000 00:09:23.863 Nvme2n2 : 5.85 139.71 8.73 0.00 0.00 804977.52 34570.96 1575153.47 00:09:23.863 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0x0 length 0x8000 00:09:23.863 Nvme2n3 : 5.83 149.84 9.36 0.00 0.00 742722.78 24382.83 923113.19 00:09:23.863 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0x8000 length 0x8000 00:09:23.863 Nvme2n3 : 5.88 144.29 9.02 0.00 0.00 759392.96 35944.64 1604458.65 00:09:23.863 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0x0 length 0x2000 00:09:23.863 Nvme3n1 : 5.83 164.54 10.28 0.00 0.00 664589.41 3076.47 945092.08 00:09:23.863 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.863 Verification LBA range: start 0x2000 length 0x2000 00:09:23.863 Nvme3n1 : 5.89 159.85 9.99 0.00 0.00 676132.84 2203.61 1626437.53 00:09:23.863 
[2024-11-19T08:30:45.770Z] =================================================================================================================== 00:09:23.863 [2024-11-19T08:30:45.770Z] Total : 1939.55 121.22 0.00 0.00 834653.24 2203.61 1992752.29 00:09:24.432 00:09:24.432 real 0m7.433s 00:09:24.432 user 0m14.054s 00:09:24.432 sys 0m0.274s 00:09:24.432 08:30:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.432 08:30:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:24.432 ************************************ 00:09:24.432 END TEST bdev_verify_big_io 00:09:24.432 ************************************ 00:09:24.692 08:30:46 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:24.692 08:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:24.692 08:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.692 08:30:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:24.692 ************************************ 00:09:24.692 START TEST bdev_write_zeroes 00:09:24.692 ************************************ 00:09:24.692 08:30:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:24.692 [2024-11-19 08:30:46.447272] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:24.692 [2024-11-19 08:30:46.447423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74830 ] 00:09:24.952 [2024-11-19 08:30:46.596438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.952 [2024-11-19 08:30:46.626524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.210 Running I/O for 1 seconds... 
00:09:26.589 57280.00 IOPS, 223.75 MiB/s 00:09:26.589 Latency(us) 00:09:26.589 [2024-11-19T08:30:48.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.589 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.589 Nvme0n1 : 1.03 8155.55 31.86 0.00 0.00 15640.51 13221.67 31823.59 00:09:26.589 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.589 Nvme1n1p1 : 1.03 8144.69 31.82 0.00 0.00 15637.10 13336.15 34113.06 00:09:26.589 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.589 Nvme1n1p2 : 1.03 8133.74 31.77 0.00 0.00 15532.20 12706.54 29076.23 00:09:26.589 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.589 Nvme2n1 : 1.03 8165.31 31.90 0.00 0.00 15457.50 9615.76 26328.87 00:09:26.589 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.589 Nvme2n2 : 1.04 8155.68 31.86 0.00 0.00 15427.39 9959.18 25756.51 00:09:26.589 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.589 Nvme2n3 : 1.04 8146.07 31.82 0.00 0.00 15401.44 9673.00 25870.98 00:09:26.589 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.589 Nvme3n1 : 1.03 8052.39 31.45 0.00 0.00 15539.88 10989.44 30220.97 00:09:26.589 [2024-11-19T08:30:48.496Z] =================================================================================================================== 00:09:26.589 [2024-11-19T08:30:48.496Z] Total : 56953.43 222.47 0.00 0.00 15519.11 9615.76 34113.06 00:09:26.589 00:09:26.589 real 0m2.055s 00:09:26.589 user 0m1.748s 00:09:26.589 sys 0m0.198s 00:09:26.589 08:30:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.589 08:30:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:26.589 ************************************ 00:09:26.589 END TEST bdev_write_zeroes 00:09:26.589 ************************************ 00:09:26.589 08:30:48 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:26.589 08:30:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:26.589 08:30:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.589 08:30:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:26.849 ************************************ 00:09:26.849 START TEST bdev_json_nonenclosed 00:09:26.849 ************************************ 00:09:26.849 08:30:48 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:26.849 [2024-11-19 08:30:48.576268] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
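
The write_zeroes stage above runs the same binary with the workload switched to -w write_zeroes on a single reactor core for one second; an equivalent invocation under the same assumptions:

  ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1
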
00:09:26.849 [2024-11-19 08:30:48.576412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74872 ] 00:09:26.849 [2024-11-19 08:30:48.733784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.108 [2024-11-19 08:30:48.778759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.108 [2024-11-19 08:30:48.778881] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:27.108 [2024-11-19 08:30:48.778908] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:27.108 [2024-11-19 08:30:48.778922] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:27.108 00:09:27.108 real 0m0.404s 00:09:27.108 user 0m0.191s 00:09:27.108 sys 0m0.110s 00:09:27.108 08:30:48 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.108 08:30:48 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:27.108 ************************************ 00:09:27.108 END TEST bdev_json_nonenclosed 00:09:27.108 ************************************ 00:09:27.108 08:30:48 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.108 08:30:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:27.108 08:30:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.108 08:30:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:27.108 ************************************ 00:09:27.108 START TEST bdev_json_nonarray 00:09:27.108 ************************************ 00:09:27.108 08:30:48 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.368 [2024-11-19 08:30:49.045862] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:27.368 [2024-11-19 08:30:49.045996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74897 ] 00:09:27.368 [2024-11-19 08:30:49.202825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.368 [2024-11-19 08:30:49.248852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.368 [2024-11-19 08:30:49.248994] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
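
bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is pointed at deliberately malformed configuration files (nonenclosed.json and nonarray.json) and is expected to fail cleanly with the two json_config errors shown here instead of crashing. A valid SPDK JSON configuration is a single object whose "subsystems" key is an array of subsystem objects; a minimal skeleton (bdev entries omitted) looks like:

  {
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
  }

The two fixtures presumably break exactly those properties (one drops the enclosing braces, the other makes "subsystems" a non-array value), which is what the 'not enclosed in {}' and ''subsystems' should be an array' messages report.
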
00:09:27.368 [2024-11-19 08:30:49.249015] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:27.368 [2024-11-19 08:30:49.249040] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:27.628 00:09:27.628 real 0m0.410s 00:09:27.628 user 0m0.182s 00:09:27.628 sys 0m0.123s 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:27.628 ************************************ 00:09:27.628 END TEST bdev_json_nonarray 00:09:27.628 ************************************ 00:09:27.628 08:30:49 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:09:27.628 08:30:49 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:09:27.628 08:30:49 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:27.628 08:30:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.628 08:30:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.628 08:30:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:27.628 ************************************ 00:09:27.628 START TEST bdev_gpt_uuid 00:09:27.628 ************************************ 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74923 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74923 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 74923 ']' 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:27.628 08:30:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:27.888 [2024-11-19 08:30:49.533403] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:09:27.888 [2024-11-19 08:30:49.533556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74923 ] 00:09:27.888 [2024-11-19 08:30:49.672194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.888 [2024-11-19 08:30:49.717936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.827 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.827 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:09:28.827 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:28.827 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.827 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:29.088 Some configs were skipped because the RPC state that can call them passed over. 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:09:29.088 { 00:09:29.088 "name": "Nvme1n1p1", 00:09:29.088 "aliases": [ 00:09:29.088 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:29.088 ], 00:09:29.088 "product_name": "GPT Disk", 00:09:29.088 "block_size": 4096, 00:09:29.088 "num_blocks": 655104, 00:09:29.088 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:29.088 "assigned_rate_limits": { 00:09:29.088 "rw_ios_per_sec": 0, 00:09:29.088 "rw_mbytes_per_sec": 0, 00:09:29.088 "r_mbytes_per_sec": 0, 00:09:29.088 "w_mbytes_per_sec": 0 00:09:29.088 }, 00:09:29.088 "claimed": false, 00:09:29.088 "zoned": false, 00:09:29.088 "supported_io_types": { 00:09:29.088 "read": true, 00:09:29.088 "write": true, 00:09:29.088 "unmap": true, 00:09:29.088 "flush": true, 00:09:29.088 "reset": true, 00:09:29.088 "nvme_admin": false, 00:09:29.088 "nvme_io": false, 00:09:29.088 "nvme_io_md": false, 00:09:29.088 "write_zeroes": true, 00:09:29.088 "zcopy": false, 00:09:29.088 "get_zone_info": false, 00:09:29.088 "zone_management": false, 00:09:29.088 "zone_append": false, 00:09:29.088 "compare": true, 00:09:29.088 "compare_and_write": false, 00:09:29.088 "abort": true, 00:09:29.088 "seek_hole": false, 00:09:29.088 "seek_data": false, 00:09:29.088 "copy": true, 00:09:29.088 "nvme_iov_md": false 00:09:29.088 }, 00:09:29.088 "driver_specific": { 
00:09:29.088 "gpt": { 00:09:29.088 "base_bdev": "Nvme1n1", 00:09:29.088 "offset_blocks": 256, 00:09:29.088 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:29.088 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:29.088 "partition_name": "SPDK_TEST_first" 00:09:29.088 } 00:09:29.088 } 00:09:29.088 } 00:09:29.088 ]' 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:09:29.088 { 00:09:29.088 "name": "Nvme1n1p2", 00:09:29.088 "aliases": [ 00:09:29.088 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:29.088 ], 00:09:29.088 "product_name": "GPT Disk", 00:09:29.088 "block_size": 4096, 00:09:29.088 "num_blocks": 655103, 00:09:29.088 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:29.088 "assigned_rate_limits": { 00:09:29.088 "rw_ios_per_sec": 0, 00:09:29.088 "rw_mbytes_per_sec": 0, 00:09:29.088 "r_mbytes_per_sec": 0, 00:09:29.088 "w_mbytes_per_sec": 0 00:09:29.088 }, 00:09:29.088 "claimed": false, 00:09:29.088 "zoned": false, 00:09:29.088 "supported_io_types": { 00:09:29.088 "read": true, 00:09:29.088 "write": true, 00:09:29.088 "unmap": true, 00:09:29.088 "flush": true, 00:09:29.088 "reset": true, 00:09:29.088 "nvme_admin": false, 00:09:29.088 "nvme_io": false, 00:09:29.088 "nvme_io_md": false, 00:09:29.088 "write_zeroes": true, 00:09:29.088 "zcopy": false, 00:09:29.088 "get_zone_info": false, 00:09:29.088 "zone_management": false, 00:09:29.088 "zone_append": false, 00:09:29.088 "compare": true, 00:09:29.088 "compare_and_write": false, 00:09:29.088 "abort": true, 00:09:29.088 "seek_hole": false, 00:09:29.088 "seek_data": false, 00:09:29.088 "copy": true, 00:09:29.088 "nvme_iov_md": false 00:09:29.088 }, 00:09:29.088 "driver_specific": { 00:09:29.088 "gpt": { 00:09:29.088 "base_bdev": "Nvme1n1", 00:09:29.088 "offset_blocks": 655360, 00:09:29.088 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:29.088 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:29.088 "partition_name": "SPDK_TEST_second" 00:09:29.088 } 00:09:29.088 } 00:09:29.088 } 00:09:29.088 ]' 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:09:29.088 08:30:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74923 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 74923 ']' 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 74923 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74923 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.349 killing process with pid 74923 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74923' 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 74923 00:09:29.349 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 74923 00:09:29.609 00:09:29.609 real 0m2.023s 00:09:29.609 user 0m2.078s 00:09:29.609 sys 0m0.598s 00:09:29.609 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.609 ************************************ 00:09:29.609 END TEST bdev_gpt_uuid 00:09:29.609 ************************************ 00:09:29.609 08:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:29.869 08:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:09:29.869 08:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:29.869 08:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:09:29.869 08:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:29.869 08:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:29.869 08:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:29.869 08:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:29.869 08:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:29.869 08:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:30.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:30.387 Waiting for block devices as requested 00:09:30.647 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.647 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:30.647 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.907 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.204 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:36.204 08:30:57 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:36.204 08:30:57 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:36.204 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:36.204 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:36.204 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:36.204 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:36.204 08:30:57 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:36.204 00:09:36.204 real 0m51.755s 00:09:36.204 user 1m4.733s 00:09:36.204 sys 0m10.756s 00:09:36.204 08:30:57 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.204 08:30:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:36.205 ************************************ 00:09:36.205 END TEST blockdev_nvme_gpt 00:09:36.205 ************************************ 00:09:36.205 08:30:57 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:36.205 08:30:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.205 08:30:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.205 08:30:57 -- common/autotest_common.sh@10 -- # set +x 00:09:36.205 ************************************ 00:09:36.205 START TEST nvme 00:09:36.205 ************************************ 00:09:36.205 08:30:58 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:36.464 * Looking for test storage... 00:09:36.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:36.464 08:30:58 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.464 08:30:58 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.464 08:30:58 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.464 08:30:58 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.464 08:30:58 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.464 08:30:58 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.464 08:30:58 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.464 08:30:58 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.464 08:30:58 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.464 08:30:58 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.464 08:30:58 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.464 08:30:58 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.464 08:30:58 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.464 08:30:58 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.464 08:30:58 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.464 08:30:58 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:36.465 08:30:58 nvme -- scripts/common.sh@345 -- # : 1 00:09:36.465 08:30:58 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.465 08:30:58 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.465 08:30:58 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:36.465 08:30:58 nvme -- scripts/common.sh@353 -- # local d=1 00:09:36.465 08:30:58 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.465 08:30:58 nvme -- scripts/common.sh@355 -- # echo 1 00:09:36.465 08:30:58 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.465 08:30:58 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:36.465 08:30:58 nvme -- scripts/common.sh@353 -- # local d=2 00:09:36.465 08:30:58 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.465 08:30:58 nvme -- scripts/common.sh@355 -- # echo 2 00:09:36.465 08:30:58 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.465 08:30:58 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.465 08:30:58 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.465 08:30:58 nvme -- scripts/common.sh@368 -- # return 0 00:09:36.465 08:30:58 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.465 08:30:58 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.465 --rc genhtml_branch_coverage=1 00:09:36.465 --rc genhtml_function_coverage=1 00:09:36.465 --rc genhtml_legend=1 00:09:36.465 --rc geninfo_all_blocks=1 00:09:36.465 --rc geninfo_unexecuted_blocks=1 00:09:36.465 00:09:36.465 ' 00:09:36.465 08:30:58 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.465 --rc genhtml_branch_coverage=1 00:09:36.465 --rc genhtml_function_coverage=1 00:09:36.465 --rc genhtml_legend=1 00:09:36.465 --rc geninfo_all_blocks=1 00:09:36.465 --rc geninfo_unexecuted_blocks=1 00:09:36.465 00:09:36.465 ' 00:09:36.465 08:30:58 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.465 --rc genhtml_branch_coverage=1 00:09:36.465 --rc genhtml_function_coverage=1 00:09:36.465 --rc genhtml_legend=1 00:09:36.465 --rc geninfo_all_blocks=1 00:09:36.465 --rc geninfo_unexecuted_blocks=1 00:09:36.465 00:09:36.465 ' 00:09:36.465 08:30:58 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.465 --rc genhtml_branch_coverage=1 00:09:36.465 --rc genhtml_function_coverage=1 00:09:36.465 --rc genhtml_legend=1 00:09:36.465 --rc geninfo_all_blocks=1 00:09:36.465 --rc geninfo_unexecuted_blocks=1 00:09:36.465 00:09:36.465 ' 00:09:36.465 08:30:58 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:37.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.973 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.973 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.973 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.973 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.973 08:30:59 nvme -- nvme/nvme.sh@79 -- # uname 00:09:37.973 08:30:59 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:37.973 08:30:59 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:37.973 08:30:59 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:37.974 08:30:59 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:37.974 08:30:59 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:09:37.974 08:30:59 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:09:37.974 08:30:59 nvme -- common/autotest_common.sh@1075 -- # stubpid=75561 00:09:37.974 08:30:59 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:37.974 08:30:59 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:09:37.974 Waiting for stub to ready for secondary processes... 00:09:37.974 08:30:59 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:37.974 08:30:59 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/75561 ]] 00:09:37.974 08:30:59 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:37.974 [2024-11-19 08:30:59.810418] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:37.974 [2024-11-19 08:30:59.810584] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:38.913 08:31:00 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:38.913 08:31:00 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/75561 ]] 00:09:38.913 08:31:00 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:38.913 [2024-11-19 08:31:00.778160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:38.913 [2024-11-19 08:31:00.798273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.913 [2024-11-19 08:31:00.798374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.913 [2024-11-19 08:31:00.798486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.913 [2024-11-19 08:31:00.808979] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:38.913 [2024-11-19 08:31:00.809025] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:38.913 [2024-11-19 08:31:00.816733] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:38.913 [2024-11-19 08:31:00.816944] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:39.173 [2024-11-19 08:31:00.817567] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:39.173 [2024-11-19 08:31:00.817746] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:39.173 [2024-11-19 08:31:00.817814] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:39.173 [2024-11-19 08:31:00.818523] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:39.173 [2024-11-19 08:31:00.818729] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:39.173 [2024-11-19 08:31:00.818785] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:39.173 [2024-11-19 08:31:00.819444] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:39.173 [2024-11-19 08:31:00.819617] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:39.173 [2024-11-19 08:31:00.819698] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:39.173 [2024-11-19 08:31:00.819779] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:39.173 [2024-11-19 08:31:00.819829] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:40.111 08:31:01 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:40.111 08:31:01 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:09:40.111 done. 00:09:40.111 08:31:01 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:40.111 08:31:01 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:09:40.111 08:31:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.111 08:31:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:40.111 ************************************ 00:09:40.111 START TEST nvme_reset 00:09:40.111 ************************************ 00:09:40.111 08:31:01 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:40.111 Initializing NVMe Controllers 00:09:40.111 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:40.111 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:40.111 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:40.111 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:40.111 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:40.111 00:09:40.111 real 0m0.218s 00:09:40.111 user 0m0.075s 00:09:40.111 sys 0m0.105s 00:09:40.111 08:31:02 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.111 08:31:02 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:40.111 ************************************ 00:09:40.111 END TEST nvme_reset 00:09:40.111 ************************************ 00:09:40.370 08:31:02 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:40.370 08:31:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.370 08:31:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.370 08:31:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:40.370 ************************************ 00:09:40.370 START TEST nvme_identify 00:09:40.370 ************************************ 00:09:40.370 08:31:02 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:09:40.370 08:31:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:40.370 08:31:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:40.370 08:31:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:40.370 08:31:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:40.370 08:31:02 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:40.370 08:31:02 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:09:40.370 08:31:02 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:40.370 08:31:02 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:40.370 08:31:02 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:40.370 08:31:02 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:40.370 08:31:02 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:40.370 08:31:02 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:40.633 [2024-11-19 08:31:02.340202] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 75594 terminated unexpected 00:09:40.633 ===================================================== 00:09:40.633 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:40.633 ===================================================== 00:09:40.633 Controller Capabilities/Features 00:09:40.633 ================================ 00:09:40.633 Vendor ID: 1b36 00:09:40.633 Subsystem Vendor ID: 1af4 00:09:40.633 Serial Number: 12340 00:09:40.633 Model Number: QEMU NVMe Ctrl 00:09:40.633 Firmware Version: 8.0.0 00:09:40.633 Recommended Arb Burst: 6 00:09:40.633 IEEE OUI Identifier: 00 54 52 00:09:40.633 Multi-path I/O 00:09:40.633 May have multiple subsystem ports: No 00:09:40.633 May have multiple controllers: No 00:09:40.633 Associated with SR-IOV VF: No 00:09:40.633 Max Data Transfer Size: 524288 00:09:40.633 Max Number of Namespaces: 256 00:09:40.633 Max Number of I/O Queues: 64 00:09:40.633 NVMe Specification Version (VS): 1.4 00:09:40.633 NVMe Specification Version (Identify): 1.4 00:09:40.633 Maximum Queue Entries: 2048 00:09:40.633 Contiguous Queues Required: Yes 00:09:40.633 Arbitration Mechanisms Supported 00:09:40.633 Weighted Round Robin: Not Supported 00:09:40.633 Vendor Specific: Not Supported 00:09:40.633 Reset Timeout: 7500 ms 00:09:40.633 Doorbell Stride: 4 bytes 00:09:40.633 NVM Subsystem Reset: Not Supported 00:09:40.633 Command Sets Supported 00:09:40.633 NVM Command Set: Supported 00:09:40.633 Boot Partition: Not Supported 00:09:40.633 Memory Page Size Minimum: 4096 bytes 00:09:40.633 Memory Page Size Maximum: 65536 bytes 00:09:40.633 Persistent Memory Region: Not Supported 00:09:40.633 Optional Asynchronous Events Supported 00:09:40.633 Namespace Attribute Notices: Supported 00:09:40.633 Firmware Activation Notices: Not Supported 00:09:40.633 ANA Change Notices: Not Supported 00:09:40.633 PLE Aggregate Log Change Notices: Not Supported 00:09:40.633 LBA Status Info Alert Notices: Not Supported 00:09:40.633 EGE Aggregate Log Change Notices: Not Supported 00:09:40.633 Normal NVM Subsystem Shutdown event: Not Supported 00:09:40.633 Zone Descriptor Change Notices: Not Supported 00:09:40.633 Discovery Log Change Notices: Not Supported 00:09:40.633 Controller Attributes 00:09:40.633 128-bit Host Identifier: Not Supported 00:09:40.633 Non-Operational Permissive Mode: Not Supported 00:09:40.633 NVM Sets: Not Supported 00:09:40.633 Read Recovery Levels: Not Supported 00:09:40.633 Endurance Groups: Not Supported 00:09:40.633 Predictable Latency Mode: Not Supported 00:09:40.633 Traffic Based Keep ALive: Not Supported 00:09:40.633 Namespace Granularity: Not Supported 00:09:40.633 SQ Associations: Not Supported 00:09:40.633 UUID List: Not Supported 00:09:40.633 Multi-Domain Subsystem: Not Supported 00:09:40.633 Fixed Capacity Management: Not Supported 00:09:40.633 Variable Capacity Management: Not Supported 00:09:40.633 Delete Endurance Group: Not Supported 00:09:40.633 Delete NVM Set: Not Supported 00:09:40.633 Extended LBA Formats Supported: Supported 00:09:40.633 Flexible Data Placement Supported: Not Supported 00:09:40.633 00:09:40.633 Controller Memory Buffer Support 00:09:40.633 ================================ 00:09:40.633 Supported: No 
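The identify pass whose report begins above is driven by two pieces visible in the trace: get_nvme_bdfs builds the controller list by piping scripts/gen_nvme.sh through jq, and spdk_nvme_identify is then launched once with -i 0 and left to probe the controllers itself. A minimal stand-alone sketch of the same pattern, assuming an SPDK checkout as the working directory and using the per-controller transport-ID form of the tool (ordinary spdk_nvme_identify usage, though not the form used in this run):

  #!/usr/bin/env bash
  set -euo pipefail
  # Sketch only: enumerate local NVMe controllers the way the test's
  # get_nvme_bdfs helper does, then identify each controller individually.
  # Assumes the SPDK repo root as working directory; the -r transport-ID
  # form is regular spdk_nvme_identify usage, whereas this run passed "-i 0".
  mapfile -t bdfs < <(scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  for bdf in "${bdfs[@]}"; do
      echo "=== identify ${bdf} ==="
      build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}"
  done

When no -r argument is given, as in this run, the tool probes all locally attached controllers in a single invocation, which is why the reports for all four devices (0000:00:10.0, 11.0, 13.0 and 12.0) follow one another below.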
00:09:40.633 00:09:40.633 Persistent Memory Region Support 00:09:40.633 ================================ 00:09:40.633 Supported: No 00:09:40.633 00:09:40.633 Admin Command Set Attributes 00:09:40.633 ============================ 00:09:40.633 Security Send/Receive: Not Supported 00:09:40.633 Format NVM: Supported 00:09:40.633 Firmware Activate/Download: Not Supported 00:09:40.634 Namespace Management: Supported 00:09:40.634 Device Self-Test: Not Supported 00:09:40.634 Directives: Supported 00:09:40.634 NVMe-MI: Not Supported 00:09:40.634 Virtualization Management: Not Supported 00:09:40.634 Doorbell Buffer Config: Supported 00:09:40.634 Get LBA Status Capability: Not Supported 00:09:40.634 Command & Feature Lockdown Capability: Not Supported 00:09:40.634 Abort Command Limit: 4 00:09:40.634 Async Event Request Limit: 4 00:09:40.634 Number of Firmware Slots: N/A 00:09:40.634 Firmware Slot 1 Read-Only: N/A 00:09:40.634 Firmware Activation Without Reset: N/A 00:09:40.634 Multiple Update Detection Support: N/A 00:09:40.634 Firmware Update Granularity: No Information Provided 00:09:40.634 Per-Namespace SMART Log: Yes 00:09:40.634 Asymmetric Namespace Access Log Page: Not Supported 00:09:40.634 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:40.634 Command Effects Log Page: Supported 00:09:40.634 Get Log Page Extended Data: Supported 00:09:40.634 Telemetry Log Pages: Not Supported 00:09:40.634 Persistent Event Log Pages: Not Supported 00:09:40.634 Supported Log Pages Log Page: May Support 00:09:40.634 Commands Supported & Effects Log Page: Not Supported 00:09:40.634 Feature Identifiers & Effects Log Page:May Support 00:09:40.634 NVMe-MI Commands & Effects Log Page: May Support 00:09:40.634 Data Area 4 for Telemetry Log: Not Supported 00:09:40.634 Error Log Page Entries Supported: 1 00:09:40.634 Keep Alive: Not Supported 00:09:40.634 00:09:40.634 NVM Command Set Attributes 00:09:40.634 ========================== 00:09:40.634 Submission Queue Entry Size 00:09:40.634 Max: 64 00:09:40.634 Min: 64 00:09:40.634 Completion Queue Entry Size 00:09:40.634 Max: 16 00:09:40.634 Min: 16 00:09:40.634 Number of Namespaces: 256 00:09:40.634 Compare Command: Supported 00:09:40.634 Write Uncorrectable Command: Not Supported 00:09:40.634 Dataset Management Command: Supported 00:09:40.634 Write Zeroes Command: Supported 00:09:40.634 Set Features Save Field: Supported 00:09:40.634 Reservations: Not Supported 00:09:40.634 Timestamp: Supported 00:09:40.634 Copy: Supported 00:09:40.634 Volatile Write Cache: Present 00:09:40.634 Atomic Write Unit (Normal): 1 00:09:40.634 Atomic Write Unit (PFail): 1 00:09:40.634 Atomic Compare & Write Unit: 1 00:09:40.634 Fused Compare & Write: Not Supported 00:09:40.634 Scatter-Gather List 00:09:40.634 SGL Command Set: Supported 00:09:40.634 SGL Keyed: Not Supported 00:09:40.634 SGL Bit Bucket Descriptor: Not Supported 00:09:40.634 SGL Metadata Pointer: Not Supported 00:09:40.634 Oversized SGL: Not Supported 00:09:40.634 SGL Metadata Address: Not Supported 00:09:40.634 SGL Offset: Not Supported 00:09:40.634 Transport SGL Data Block: Not Supported 00:09:40.634 Replay Protected Memory Block: Not Supported 00:09:40.634 00:09:40.634 Firmware Slot Information 00:09:40.634 ========================= 00:09:40.634 Active slot: 1 00:09:40.634 Slot 1 Firmware Revision: 1.0 00:09:40.634 00:09:40.634 00:09:40.634 Commands Supported and Effects 00:09:40.634 ============================== 00:09:40.634 Admin Commands 00:09:40.634 -------------- 00:09:40.634 Delete I/O Submission Queue (00h): Supported 
00:09:40.634 Create I/O Submission Queue (01h): Supported 00:09:40.634 Get Log Page (02h): Supported 00:09:40.634 Delete I/O Completion Queue (04h): Supported 00:09:40.634 Create I/O Completion Queue (05h): Supported 00:09:40.634 Identify (06h): Supported 00:09:40.634 Abort (08h): Supported 00:09:40.634 Set Features (09h): Supported 00:09:40.634 Get Features (0Ah): Supported 00:09:40.634 Asynchronous Event Request (0Ch): Supported 00:09:40.634 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:40.634 Directive Send (19h): Supported 00:09:40.634 Directive Receive (1Ah): Supported 00:09:40.634 Virtualization Management (1Ch): Supported 00:09:40.634 Doorbell Buffer Config (7Ch): Supported 00:09:40.634 Format NVM (80h): Supported LBA-Change 00:09:40.634 I/O Commands 00:09:40.634 ------------ 00:09:40.634 Flush (00h): Supported LBA-Change 00:09:40.634 Write (01h): Supported LBA-Change 00:09:40.634 Read (02h): Supported 00:09:40.634 Compare (05h): Supported 00:09:40.634 Write Zeroes (08h): Supported LBA-Change 00:09:40.634 Dataset Management (09h): Supported LBA-Change 00:09:40.634 Unknown (0Ch): Supported 00:09:40.634 Unknown (12h): Supported 00:09:40.634 Copy (19h): Supported LBA-Change 00:09:40.634 Unknown (1Dh): Supported LBA-Change 00:09:40.634 00:09:40.634 Error Log 00:09:40.634 ========= 00:09:40.634 00:09:40.634 Arbitration 00:09:40.634 =========== 00:09:40.634 Arbitration Burst: no limit 00:09:40.634 00:09:40.634 Power Management 00:09:40.634 ================ 00:09:40.634 Number of Power States: 1 00:09:40.635 Current Power State: Power State #0 00:09:40.635 Power State #0: 00:09:40.635 Max Power: 25.00 W 00:09:40.635 Non-Operational State: Operational 00:09:40.635 Entry Latency: 16 microseconds 00:09:40.635 Exit Latency: 4 microseconds 00:09:40.635 Relative Read Throughput: 0 00:09:40.635 Relative Read Latency: 0 00:09:40.635 Relative Write Throughput: 0 00:09:40.635 Relative Write Latency: 0 00:09:40.635 Idle Power[2024-11-19 08:31:02.341169] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 75594 terminated unexpected 00:09:40.635 : Not Reported 00:09:40.635 Active Power: Not Reported 00:09:40.635 Non-Operational Permissive Mode: Not Supported 00:09:40.635 00:09:40.635 Health Information 00:09:40.635 ================== 00:09:40.635 Critical Warnings: 00:09:40.635 Available Spare Space: OK 00:09:40.635 Temperature: OK 00:09:40.635 Device Reliability: OK 00:09:40.635 Read Only: No 00:09:40.635 Volatile Memory Backup: OK 00:09:40.635 Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.635 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:40.635 Available Spare: 0% 00:09:40.635 Available Spare Threshold: 0% 00:09:40.635 Life Percentage Used: 0% 00:09:40.635 Data Units Read: 731 00:09:40.635 Data Units Written: 659 00:09:40.635 Host Read Commands: 33868 00:09:40.635 Host Write Commands: 33654 00:09:40.635 Controller Busy Time: 0 minutes 00:09:40.635 Power Cycles: 0 00:09:40.635 Power On Hours: 0 hours 00:09:40.635 Unsafe Shutdowns: 0 00:09:40.635 Unrecoverable Media Errors: 0 00:09:40.635 Lifetime Error Log Entries: 0 00:09:40.635 Warning Temperature Time: 0 minutes 00:09:40.635 Critical Temperature Time: 0 minutes 00:09:40.635 00:09:40.635 Number of Queues 00:09:40.635 ================ 00:09:40.635 Number of I/O Submission Queues: 64 00:09:40.635 Number of I/O Completion Queues: 64 00:09:40.635 00:09:40.635 ZNS Specific Controller Data 00:09:40.635 ============================ 00:09:40.635 Zone Append Size Limit: 0 00:09:40.635 
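Stepping back to the bdev_gpt_uuid test that completed earlier in this log: its pass/fail decision reduces to pulling two fields out of bdev_get_bdevs JSON with jq and string-comparing them against the expected GUID. A compact sketch of that check, reusing the SPDK_TEST_second partition values shown in the captured output (rpc.py stands in here for the test's rpc_cmd wrapper):

  #!/usr/bin/env bash
  set -euo pipefail
  # Sketch of the GUID check from the bdev_gpt_uuid test earlier in this
  # log. The GUID is the SPDK_TEST_second partition from the captured
  # output; rpc.py is the generic RPC client behind the test's rpc_cmd.
  expected="abf1734f-66e5-4c0f-aa29-4021d4d307df"
  bdev_json=$(scripts/rpc.py bdev_get_bdevs -b "$expected")
  alias_guid=$(jq -r '.[0].aliases[0]' <<< "$bdev_json")
  part_guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json")
  if [[ "$alias_guid" == "$expected" && "$part_guid" == "$expected" ]]; then
      echo "GPT unique partition GUID verified: $expected"
  else
      echo "GUID mismatch: alias=$alias_guid gpt=$part_guid" >&2
      exit 1
  fi

The captured test additionally asserts that jq -r length on the same JSON returns 1, i.e. that looking the bdev up by its GUID alias yields exactly one entry.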
00:09:40.635 00:09:40.635 Active Namespaces 00:09:40.635 ================= 00:09:40.635 Namespace ID:1 00:09:40.635 Error Recovery Timeout: Unlimited 00:09:40.635 Command Set Identifier: NVM (00h) 00:09:40.635 Deallocate: Supported 00:09:40.635 Deallocated/Unwritten Error: Supported 00:09:40.635 Deallocated Read Value: All 0x00 00:09:40.635 Deallocate in Write Zeroes: Not Supported 00:09:40.635 Deallocated Guard Field: 0xFFFF 00:09:40.635 Flush: Supported 00:09:40.635 Reservation: Not Supported 00:09:40.635 Metadata Transferred as: Separate Metadata Buffer 00:09:40.635 Namespace Sharing Capabilities: Private 00:09:40.635 Size (in LBAs): 1548666 (5GiB) 00:09:40.635 Capacity (in LBAs): 1548666 (5GiB) 00:09:40.635 Utilization (in LBAs): 1548666 (5GiB) 00:09:40.635 Thin Provisioning: Not Supported 00:09:40.635 Per-NS Atomic Units: No 00:09:40.635 Maximum Single Source Range Length: 128 00:09:40.635 Maximum Copy Length: 128 00:09:40.635 Maximum Source Range Count: 128 00:09:40.635 NGUID/EUI64 Never Reused: No 00:09:40.635 Namespace Write Protected: No 00:09:40.635 Number of LBA Formats: 8 00:09:40.635 Current LBA Format: LBA Format #07 00:09:40.635 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.635 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:40.635 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:40.635 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:40.635 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:40.635 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:40.635 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:40.635 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:40.635 00:09:40.635 NVM Specific Namespace Data 00:09:40.635 =========================== 00:09:40.635 Logical Block Storage Tag Mask: 0 00:09:40.635 Protection Information Capabilities: 00:09:40.635 16b Guard Protection Information Storage Tag Support: No 00:09:40.635 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:40.635 Storage Tag Check Read Support: No 00:09:40.635 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.635 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.635 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.635 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.635 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.635 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.635 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.635 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.635 ===================================================== 00:09:40.635 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:40.635 ===================================================== 00:09:40.635 Controller Capabilities/Features 00:09:40.635 ================================ 00:09:40.635 Vendor ID: 1b36 00:09:40.635 Subsystem Vendor ID: 1af4 00:09:40.635 Serial Number: 12341 00:09:40.635 Model Number: QEMU NVMe Ctrl 00:09:40.636 Firmware Version: 8.0.0 00:09:40.636 Recommended Arb Burst: 6 00:09:40.636 IEEE OUI Identifier: 00 54 52 00:09:40.636 Multi-path I/O 00:09:40.636 May have multiple subsystem ports: No 00:09:40.636 May have multiple controllers: No 
00:09:40.636 Associated with SR-IOV VF: No 00:09:40.636 Max Data Transfer Size: 524288 00:09:40.636 Max Number of Namespaces: 256 00:09:40.636 Max Number of I/O Queues: 64 00:09:40.636 NVMe Specification Version (VS): 1.4 00:09:40.636 NVMe Specification Version (Identify): 1.4 00:09:40.636 Maximum Queue Entries: 2048 00:09:40.636 Contiguous Queues Required: Yes 00:09:40.636 Arbitration Mechanisms Supported 00:09:40.636 Weighted Round Robin: Not Supported 00:09:40.636 Vendor Specific: Not Supported 00:09:40.636 Reset Timeout: 7500 ms 00:09:40.636 Doorbell Stride: 4 bytes 00:09:40.636 NVM Subsystem Reset: Not Supported 00:09:40.636 Command Sets Supported 00:09:40.636 NVM Command Set: Supported 00:09:40.636 Boot Partition: Not Supported 00:09:40.636 Memory Page Size Minimum: 4096 bytes 00:09:40.636 Memory Page Size Maximum: 65536 bytes 00:09:40.636 Persistent Memory Region: Not Supported 00:09:40.636 Optional Asynchronous Events Supported 00:09:40.636 Namespace Attribute Notices: Supported 00:09:40.636 Firmware Activation Notices: Not Supported 00:09:40.636 ANA Change Notices: Not Supported 00:09:40.636 PLE Aggregate Log Change Notices: Not Supported 00:09:40.636 LBA Status Info Alert Notices: Not Supported 00:09:40.636 EGE Aggregate Log Change Notices: Not Supported 00:09:40.636 Normal NVM Subsystem Shutdown event: Not Supported 00:09:40.636 Zone Descriptor Change Notices: Not Supported 00:09:40.636 Discovery Log Change Notices: Not Supported 00:09:40.636 Controller Attributes 00:09:40.636 128-bit Host Identifier: Not Supported 00:09:40.636 Non-Operational Permissive Mode: Not Supported 00:09:40.636 NVM Sets: Not Supported 00:09:40.636 Read Recovery Levels: Not Supported 00:09:40.636 Endurance Groups: Not Supported 00:09:40.636 Predictable Latency Mode: Not Supported 00:09:40.636 Traffic Based Keep ALive: Not Supported 00:09:40.636 Namespace Granularity: Not Supported 00:09:40.636 SQ Associations: Not Supported 00:09:40.636 UUID List: Not Supported 00:09:40.636 Multi-Domain Subsystem: Not Supported 00:09:40.636 Fixed Capacity Management: Not Supported 00:09:40.636 Variable Capacity Management: Not Supported 00:09:40.636 Delete Endurance Group: Not Supported 00:09:40.636 Delete NVM Set: Not Supported 00:09:40.636 Extended LBA Formats Supported: Supported 00:09:40.636 Flexible Data Placement Supported: Not Supported 00:09:40.636 00:09:40.636 Controller Memory Buffer Support 00:09:40.636 ================================ 00:09:40.636 Supported: No 00:09:40.636 00:09:40.636 Persistent Memory Region Support 00:09:40.636 ================================ 00:09:40.636 Supported: No 00:09:40.636 00:09:40.636 Admin Command Set Attributes 00:09:40.636 ============================ 00:09:40.636 Security Send/Receive: Not Supported 00:09:40.636 Format NVM: Supported 00:09:40.636 Firmware Activate/Download: Not Supported 00:09:40.636 Namespace Management: Supported 00:09:40.636 Device Self-Test: Not Supported 00:09:40.636 Directives: Supported 00:09:40.636 NVMe-MI: Not Supported 00:09:40.636 Virtualization Management: Not Supported 00:09:40.636 Doorbell Buffer Config: Supported 00:09:40.636 Get LBA Status Capability: Not Supported 00:09:40.636 Command & Feature Lockdown Capability: Not Supported 00:09:40.636 Abort Command Limit: 4 00:09:40.636 Async Event Request Limit: 4 00:09:40.636 Number of Firmware Slots: N/A 00:09:40.636 Firmware Slot 1 Read-Only: N/A 00:09:40.636 Firmware Activation Without Reset: N/A 00:09:40.636 Multiple Update Detection Support: N/A 00:09:40.636 Firmware Update Granularity: No 
Information Provided 00:09:40.636 Per-Namespace SMART Log: Yes 00:09:40.636 Asymmetric Namespace Access Log Page: Not Supported 00:09:40.636 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:40.636 Command Effects Log Page: Supported 00:09:40.636 Get Log Page Extended Data: Supported 00:09:40.636 Telemetry Log Pages: Not Supported 00:09:40.636 Persistent Event Log Pages: Not Supported 00:09:40.636 Supported Log Pages Log Page: May Support 00:09:40.636 Commands Supported & Effects Log Page: Not Supported 00:09:40.636 Feature Identifiers & Effects Log Page:May Support 00:09:40.636 NVMe-MI Commands & Effects Log Page: May Support 00:09:40.636 Data Area 4 for Telemetry Log: Not Supported 00:09:40.636 Error Log Page Entries Supported: 1 00:09:40.636 Keep Alive: Not Supported 00:09:40.636 00:09:40.636 NVM Command Set Attributes 00:09:40.636 ========================== 00:09:40.636 Submission Queue Entry Size 00:09:40.636 Max: 64 00:09:40.636 Min: 64 00:09:40.636 Completion Queue Entry Size 00:09:40.636 Max: 16 00:09:40.636 Min: 16 00:09:40.636 Number of Namespaces: 256 00:09:40.636 Compare Command: Supported 00:09:40.636 Write Uncorrectable Command: Not Supported 00:09:40.636 Dataset Management Command: Supported 00:09:40.636 Write Zeroes Command: Supported 00:09:40.636 Set Features Save Field: Supported 00:09:40.636 Reservations: Not Supported 00:09:40.636 Timestamp: Supported 00:09:40.636 Copy: Supported 00:09:40.636 Volatile Write Cache: Present 00:09:40.636 Atomic Write Unit (Normal): 1 00:09:40.636 Atomic Write Unit (PFail): 1 00:09:40.636 Atomic Compare & Write Unit: 1 00:09:40.637 Fused Compare & Write: Not Supported 00:09:40.637 Scatter-Gather List 00:09:40.637 SGL Command Set: Supported 00:09:40.637 SGL Keyed: Not Supported 00:09:40.637 SGL Bit Bucket Descriptor: Not Supported 00:09:40.637 SGL Metadata Pointer: Not Supported 00:09:40.637 Oversized SGL: Not Supported 00:09:40.637 SGL Metadata Address: Not Supported 00:09:40.637 SGL Offset: Not Supported 00:09:40.637 Transport SGL Data Block: Not Supported 00:09:40.637 Replay Protected Memory Block: Not Supported 00:09:40.637 00:09:40.637 Firmware Slot Information 00:09:40.637 ========================= 00:09:40.637 Active slot: 1 00:09:40.637 Slot 1 Firmware Revision: 1.0 00:09:40.637 00:09:40.637 00:09:40.637 Commands Supported and Effects 00:09:40.637 ============================== 00:09:40.637 Admin Commands 00:09:40.637 -------------- 00:09:40.637 Delete I/O Submission Queue (00h): Supported 00:09:40.637 Create I/O Submission Queue (01h): Supported 00:09:40.637 Get Log Page (02h): Supported 00:09:40.637 Delete I/O Completion Queue (04h): Supported 00:09:40.637 Create I/O Completion Queue (05h): Supported 00:09:40.637 Identify (06h): Supported 00:09:40.637 Abort (08h): Supported 00:09:40.637 Set Features (09h): Supported 00:09:40.637 Get Features (0Ah): Supported 00:09:40.637 Asynchronous Event Request (0Ch): Supported 00:09:40.637 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:40.637 Directive Send (19h): Supported 00:09:40.637 Directive Receive (1Ah): Supported 00:09:40.637 Virtualization Management (1Ch): Supported 00:09:40.637 Doorbell Buffer Config (7Ch): Supported 00:09:40.637 Format NVM (80h): Supported LBA-Change 00:09:40.637 I/O Commands 00:09:40.637 ------------ 00:09:40.637 Flush (00h): Supported LBA-Change 00:09:40.637 Write (01h): Supported LBA-Change 00:09:40.637 Read (02h): Supported 00:09:40.637 Compare (05h): Supported 00:09:40.637 Write Zeroes (08h): Supported LBA-Change 00:09:40.637 Dataset Management 
(09h): Supported LBA-Change 00:09:40.637 Unknown (0Ch): Supported 00:09:40.637 Unknown (12h): Supported 00:09:40.637 Copy (19h): Supported LBA-Change 00:09:40.637 Unknown (1Dh): Supported LBA-Change 00:09:40.637 00:09:40.637 Error Log 00:09:40.637 ========= 00:09:40.637 00:09:40.637 Arbitration 00:09:40.637 =========== 00:09:40.637 Arbitration Burst: no limit 00:09:40.637 00:09:40.637 Power Management 00:09:40.637 ================ 00:09:40.637 Number of Power States: 1 00:09:40.637 Current Power State: Power State #0 00:09:40.637 Power State #0: 00:09:40.637 Max Power: 25.00 W 00:09:40.637 Non-Operational State: Operational 00:09:40.637 Entry Latency: 16 microseconds 00:09:40.637 Exit Latency: 4 microseconds 00:09:40.637 Relative Read Throughput: 0 00:09:40.637 Relative Read Latency: 0 00:09:40.637 Relative Write Throughput: 0 00:09:40.637 Relative Write Latency: 0 00:09:40.637 Idle Power: Not Reported 00:09:40.637 Active Power: Not Reported 00:09:40.637 Non-Operational Permissive Mode: Not Supported 00:09:40.637 00:09:40.637 Health Information 00:09:40.637 ================== 00:09:40.637 Critical Warnings: 00:09:40.637 Available Spare Space: OK 00:09:40.637 Temperature: [2024-11-19 08:31:02.341736] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 75594 terminated unexpected 00:09:40.637 OK 00:09:40.637 Device Reliability: OK 00:09:40.637 Read Only: No 00:09:40.637 Volatile Memory Backup: OK 00:09:40.637 Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.637 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:40.637 Available Spare: 0% 00:09:40.637 Available Spare Threshold: 0% 00:09:40.637 Life Percentage Used: 0% 00:09:40.637 Data Units Read: 1120 00:09:40.637 Data Units Written: 980 00:09:40.637 Host Read Commands: 50854 00:09:40.637 Host Write Commands: 49543 00:09:40.637 Controller Busy Time: 0 minutes 00:09:40.637 Power Cycles: 0 00:09:40.637 Power On Hours: 0 hours 00:09:40.637 Unsafe Shutdowns: 0 00:09:40.637 Unrecoverable Media Errors: 0 00:09:40.637 Lifetime Error Log Entries: 0 00:09:40.637 Warning Temperature Time: 0 minutes 00:09:40.637 Critical Temperature Time: 0 minutes 00:09:40.637 00:09:40.637 Number of Queues 00:09:40.637 ================ 00:09:40.637 Number of I/O Submission Queues: 64 00:09:40.637 Number of I/O Completion Queues: 64 00:09:40.637 00:09:40.637 ZNS Specific Controller Data 00:09:40.637 ============================ 00:09:40.637 Zone Append Size Limit: 0 00:09:40.637 00:09:40.637 00:09:40.637 Active Namespaces 00:09:40.637 ================= 00:09:40.637 Namespace ID:1 00:09:40.637 Error Recovery Timeout: Unlimited 00:09:40.637 Command Set Identifier: NVM (00h) 00:09:40.637 Deallocate: Supported 00:09:40.637 Deallocated/Unwritten Error: Supported 00:09:40.637 Deallocated Read Value: All 0x00 00:09:40.637 Deallocate in Write Zeroes: Not Supported 00:09:40.637 Deallocated Guard Field: 0xFFFF 00:09:40.638 Flush: Supported 00:09:40.638 Reservation: Not Supported 00:09:40.638 Namespace Sharing Capabilities: Private 00:09:40.638 Size (in LBAs): 1310720 (5GiB) 00:09:40.638 Capacity (in LBAs): 1310720 (5GiB) 00:09:40.638 Utilization (in LBAs): 1310720 (5GiB) 00:09:40.638 Thin Provisioning: Not Supported 00:09:40.638 Per-NS Atomic Units: No 00:09:40.638 Maximum Single Source Range Length: 128 00:09:40.638 Maximum Copy Length: 128 00:09:40.638 Maximum Source Range Count: 128 00:09:40.638 NGUID/EUI64 Never Reused: No 00:09:40.638 Namespace Write Protected: No 00:09:40.638 Number of LBA Formats: 8 00:09:40.638 Current LBA Format: 
LBA Format #04 00:09:40.638 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.638 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:40.638 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:40.638 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:40.638 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:40.638 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:40.638 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:40.638 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:40.638 00:09:40.638 NVM Specific Namespace Data 00:09:40.638 =========================== 00:09:40.638 Logical Block Storage Tag Mask: 0 00:09:40.638 Protection Information Capabilities: 00:09:40.638 16b Guard Protection Information Storage Tag Support: No 00:09:40.638 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:40.638 Storage Tag Check Read Support: No 00:09:40.638 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.638 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.638 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.638 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.638 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.638 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.638 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.638 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.638 ===================================================== 00:09:40.638 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:40.638 ===================================================== 00:09:40.638 Controller Capabilities/Features 00:09:40.638 ================================ 00:09:40.638 Vendor ID: 1b36 00:09:40.638 Subsystem Vendor ID: 1af4 00:09:40.638 Serial Number: 12343 00:09:40.638 Model Number: QEMU NVMe Ctrl 00:09:40.638 Firmware Version: 8.0.0 00:09:40.638 Recommended Arb Burst: 6 00:09:40.638 IEEE OUI Identifier: 00 54 52 00:09:40.638 Multi-path I/O 00:09:40.638 May have multiple subsystem ports: No 00:09:40.638 May have multiple controllers: Yes 00:09:40.638 Associated with SR-IOV VF: No 00:09:40.638 Max Data Transfer Size: 524288 00:09:40.638 Max Number of Namespaces: 256 00:09:40.638 Max Number of I/O Queues: 64 00:09:40.638 NVMe Specification Version (VS): 1.4 00:09:40.638 NVMe Specification Version (Identify): 1.4 00:09:40.638 Maximum Queue Entries: 2048 00:09:40.638 Contiguous Queues Required: Yes 00:09:40.638 Arbitration Mechanisms Supported 00:09:40.638 Weighted Round Robin: Not Supported 00:09:40.638 Vendor Specific: Not Supported 00:09:40.638 Reset Timeout: 7500 ms 00:09:40.638 Doorbell Stride: 4 bytes 00:09:40.638 NVM Subsystem Reset: Not Supported 00:09:40.638 Command Sets Supported 00:09:40.638 NVM Command Set: Supported 00:09:40.638 Boot Partition: Not Supported 00:09:40.638 Memory Page Size Minimum: 4096 bytes 00:09:40.638 Memory Page Size Maximum: 65536 bytes 00:09:40.638 Persistent Memory Region: Not Supported 00:09:40.638 Optional Asynchronous Events Supported 00:09:40.638 Namespace Attribute Notices: Supported 00:09:40.638 Firmware Activation Notices: Not Supported 00:09:40.638 ANA Change Notices: Not Supported 00:09:40.638 PLE Aggregate Log 
Change Notices: Not Supported 00:09:40.638 LBA Status Info Alert Notices: Not Supported 00:09:40.638 EGE Aggregate Log Change Notices: Not Supported 00:09:40.638 Normal NVM Subsystem Shutdown event: Not Supported 00:09:40.638 Zone Descriptor Change Notices: Not Supported 00:09:40.638 Discovery Log Change Notices: Not Supported 00:09:40.638 Controller Attributes 00:09:40.638 128-bit Host Identifier: Not Supported 00:09:40.638 Non-Operational Permissive Mode: Not Supported 00:09:40.638 NVM Sets: Not Supported 00:09:40.638 Read Recovery Levels: Not Supported 00:09:40.638 Endurance Groups: Supported 00:09:40.638 Predictable Latency Mode: Not Supported 00:09:40.638 Traffic Based Keep ALive: Not Supported 00:09:40.638 Namespace Granularity: Not Supported 00:09:40.638 SQ Associations: Not Supported 00:09:40.638 UUID List: Not Supported 00:09:40.638 Multi-Domain Subsystem: Not Supported 00:09:40.638 Fixed Capacity Management: Not Supported 00:09:40.638 Variable Capacity Management: Not Supported 00:09:40.638 Delete Endurance Group: Not Supported 00:09:40.638 Delete NVM Set: Not Supported 00:09:40.638 Extended LBA Formats Supported: Supported 00:09:40.638 Flexible Data Placement Supported: Supported 00:09:40.638 00:09:40.638 Controller Memory Buffer Support 00:09:40.638 ================================ 00:09:40.638 Supported: No 00:09:40.638 00:09:40.638 Persistent Memory Region Support 00:09:40.638 ================================ 00:09:40.639 Supported: No 00:09:40.639 00:09:40.639 Admin Command Set Attributes 00:09:40.639 ============================ 00:09:40.639 Security Send/Receive: Not Supported 00:09:40.639 Format NVM: Supported 00:09:40.639 Firmware Activate/Download: Not Supported 00:09:40.639 Namespace Management: Supported 00:09:40.639 Device Self-Test: Not Supported 00:09:40.639 Directives: Supported 00:09:40.639 NVMe-MI: Not Supported 00:09:40.639 Virtualization Management: Not Supported 00:09:40.639 Doorbell Buffer Config: Supported 00:09:40.639 Get LBA Status Capability: Not Supported 00:09:40.639 Command & Feature Lockdown Capability: Not Supported 00:09:40.639 Abort Command Limit: 4 00:09:40.639 Async Event Request Limit: 4 00:09:40.639 Number of Firmware Slots: N/A 00:09:40.639 Firmware Slot 1 Read-Only: N/A 00:09:40.639 Firmware Activation Without Reset: N/A 00:09:40.639 Multiple Update Detection Support: N/A 00:09:40.639 Firmware Update Granularity: No Information Provided 00:09:40.639 Per-Namespace SMART Log: Yes 00:09:40.639 Asymmetric Namespace Access Log Page: Not Supported 00:09:40.639 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:40.639 Command Effects Log Page: Supported 00:09:40.639 Get Log Page Extended Data: Supported 00:09:40.639 Telemetry Log Pages: Not Supported 00:09:40.639 Persistent Event Log Pages: Not Supported 00:09:40.639 Supported Log Pages Log Page: May Support 00:09:40.639 Commands Supported & Effects Log Page: Not Supported 00:09:40.639 Feature Identifiers & Effects Log Page:May Support 00:09:40.639 NVMe-MI Commands & Effects Log Page: May Support 00:09:40.639 Data Area 4 for Telemetry Log: Not Supported 00:09:40.639 Error Log Page Entries Supported: 1 00:09:40.639 Keep Alive: Not Supported 00:09:40.639 00:09:40.639 NVM Command Set Attributes 00:09:40.639 ========================== 00:09:40.639 Submission Queue Entry Size 00:09:40.639 Max: 64 00:09:40.639 Min: 64 00:09:40.639 Completion Queue Entry Size 00:09:40.639 Max: 16 00:09:40.639 Min: 16 00:09:40.639 Number of Namespaces: 256 00:09:40.639 Compare Command: Supported 00:09:40.639 Write 
Uncorrectable Command: Not Supported 00:09:40.639 Dataset Management Command: Supported 00:09:40.639 Write Zeroes Command: Supported 00:09:40.639 Set Features Save Field: Supported 00:09:40.639 Reservations: Not Supported 00:09:40.639 Timestamp: Supported 00:09:40.639 Copy: Supported 00:09:40.639 Volatile Write Cache: Present 00:09:40.639 Atomic Write Unit (Normal): 1 00:09:40.639 Atomic Write Unit (PFail): 1 00:09:40.639 Atomic Compare & Write Unit: 1 00:09:40.639 Fused Compare & Write: Not Supported 00:09:40.639 Scatter-Gather List 00:09:40.639 SGL Command Set: Supported 00:09:40.639 SGL Keyed: Not Supported 00:09:40.639 SGL Bit Bucket Descriptor: Not Supported 00:09:40.639 SGL Metadata Pointer: Not Supported 00:09:40.639 Oversized SGL: Not Supported 00:09:40.639 SGL Metadata Address: Not Supported 00:09:40.639 SGL Offset: Not Supported 00:09:40.639 Transport SGL Data Block: Not Supported 00:09:40.639 Replay Protected Memory Block: Not Supported 00:09:40.639 00:09:40.639 Firmware Slot Information 00:09:40.639 ========================= 00:09:40.639 Active slot: 1 00:09:40.639 Slot 1 Firmware Revision: 1.0 00:09:40.639 00:09:40.639 00:09:40.639 Commands Supported and Effects 00:09:40.639 ============================== 00:09:40.639 Admin Commands 00:09:40.639 -------------- 00:09:40.639 Delete I/O Submission Queue (00h): Supported 00:09:40.639 Create I/O Submission Queue (01h): Supported 00:09:40.639 Get Log Page (02h): Supported 00:09:40.639 Delete I/O Completion Queue (04h): Supported 00:09:40.639 Create I/O Completion Queue (05h): Supported 00:09:40.639 Identify (06h): Supported 00:09:40.639 Abort (08h): Supported 00:09:40.639 Set Features (09h): Supported 00:09:40.639 Get Features (0Ah): Supported 00:09:40.639 Asynchronous Event Request (0Ch): Supported 00:09:40.639 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:40.639 Directive Send (19h): Supported 00:09:40.639 Directive Receive (1Ah): Supported 00:09:40.639 Virtualization Management (1Ch): Supported 00:09:40.639 Doorbell Buffer Config (7Ch): Supported 00:09:40.639 Format NVM (80h): Supported LBA-Change 00:09:40.639 I/O Commands 00:09:40.639 ------------ 00:09:40.639 Flush (00h): Supported LBA-Change 00:09:40.639 Write (01h): Supported LBA-Change 00:09:40.639 Read (02h): Supported 00:09:40.639 Compare (05h): Supported 00:09:40.639 Write Zeroes (08h): Supported LBA-Change 00:09:40.639 Dataset Management (09h): Supported LBA-Change 00:09:40.639 Unknown (0Ch): Supported 00:09:40.639 Unknown (12h): Supported 00:09:40.639 Copy (19h): Supported LBA-Change 00:09:40.639 Unknown (1Dh): Supported LBA-Change 00:09:40.639 00:09:40.639 Error Log 00:09:40.639 ========= 00:09:40.639 00:09:40.639 Arbitration 00:09:40.639 =========== 00:09:40.639 Arbitration Burst: no limit 00:09:40.639 00:09:40.639 Power Management 00:09:40.639 ================ 00:09:40.639 Number of Power States: 1 00:09:40.639 Current Power State: Power State #0 00:09:40.639 Power State #0: 00:09:40.639 Max Power: 25.00 W 00:09:40.639 Non-Operational State: Operational 00:09:40.639 Entry Latency: 16 microseconds 00:09:40.639 Exit Latency: 4 microseconds 00:09:40.639 Relative Read Throughput: 0 00:09:40.639 Relative Read Latency: 0 00:09:40.639 Relative Write Throughput: 0 00:09:40.640 Relative Write Latency: 0 00:09:40.640 Idle Power: Not Reported 00:09:40.640 Active Power: Not Reported 00:09:40.640 Non-Operational Permissive Mode: Not Supported 00:09:40.640 00:09:40.640 Health Information 00:09:40.640 ================== 00:09:40.640 Critical Warnings: 00:09:40.640 
Available Spare Space: OK 00:09:40.640 Temperature: OK 00:09:40.640 Device Reliability: OK 00:09:40.640 Read Only: No 00:09:40.640 Volatile Memory Backup: OK 00:09:40.640 Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.640 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:40.640 Available Spare: 0% 00:09:40.640 Available Spare Threshold: 0% 00:09:40.640 Life Percentage Used: 0% 00:09:40.640 Data Units Read: 932 00:09:40.640 Data Units Written: 861 00:09:40.640 Host Read Commands: 35800 00:09:40.640 Host Write Commands: 35223 00:09:40.640 Controller Busy Time: 0 minutes 00:09:40.640 Power Cycles: 0 00:09:40.640 Power On Hours: 0 hours 00:09:40.640 Unsafe Shutdowns: 0 00:09:40.640 Unrecoverable Media Errors: 0 00:09:40.640 Lifetime Error Log Entries: 0 00:09:40.640 Warning Temperature Time: 0 minutes 00:09:40.640 Critical Temperature Time: 0 minutes 00:09:40.640 00:09:40.640 Number of Queues 00:09:40.640 ================ 00:09:40.640 Number of I/O Submission Queues: 64 00:09:40.640 Number of I/O Completion Queues: 64 00:09:40.640 00:09:40.640 ZNS Specific Controller Data 00:09:40.640 ============================ 00:09:40.640 Zone Append Size Limit: 0 00:09:40.640 00:09:40.640 00:09:40.640 Active Namespaces 00:09:40.640 ================= 00:09:40.640 Namespace ID:1 00:09:40.640 Error Recovery Timeout: Unlimited 00:09:40.640 Command Set Identifier: NVM (00h) 00:09:40.640 Deallocate: Supported 00:09:40.640 Deallocated/Unwritten Error: Supported 00:09:40.640 Deallocated Read Value: All 0x00 00:09:40.640 Deallocate in Write Zeroes: Not Supported 00:09:40.640 Deallocated Guard Field: 0xFFFF 00:09:40.640 Flush: Supported 00:09:40.640 Reservation: Not Supported 00:09:40.640 Namespace Sharing Capabilities: Multiple Controllers 00:09:40.640 Size (in LBAs): 262144 (1GiB) 00:09:40.640 Capacity (in LBAs): 262144 (1GiB) 00:09:40.640 Utilization (in LBAs): 262144 (1GiB) 00:09:40.640 Thin Provisioning: Not Supported 00:09:40.640 Per-NS Atomic Units: No 00:09:40.640 Maximum Single Source Range Length: 128 00:09:40.640 Maximum Copy Length: 128 00:09:40.640 Maximum Source Range Count: 128 00:09:40.640 NGUID/EUI64 Never Reused: No 00:09:40.640 Namespace Write Protected: No 00:09:40.640 Endurance group ID: 1 00:09:40.640 Number of LBA Formats: 8 00:09:40.640 Current LBA Format: LBA Format #04 00:09:40.640 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.640 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:40.640 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:40.640 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:40.640 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:40.640 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:40.640 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:40.640 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:40.640 00:09:40.640 Get Feature FDP: 00:09:40.640 ================ 00:09:40.640 Enabled: Yes 00:09:40.640 FDP configuration index: 0 00:09:40.640 00:09:40.640 FDP configurations log page 00:09:40.640 =========================== 00:09:40.640 Number of FDP configurations: 1 00:09:40.640 Version: 0 00:09:40.640 Size: 112 00:09:40.640 FDP Configuration Descriptor: 0 00:09:40.640 Descriptor Size: 96 00:09:40.640 Reclaim Group Identifier format: 2 00:09:40.640 FDP Volatile Write Cache: Not Present 00:09:40.640 FDP Configuration: Valid 00:09:40.640 Vendor Specific Size: 0 00:09:40.640 Number of Reclaim Groups: 2 00:09:40.640 Number of Recalim Unit Handles: 8 00:09:40.640 Max Placement Identifiers: 128 00:09:40.640 Number of 
Namespaces Suppprted: 256 00:09:40.640 Reclaim unit Nominal Size: 6000000 bytes 00:09:40.640 Estimated Reclaim Unit Time Limit: Not Reported 00:09:40.640 RUH Desc #000: RUH Type: Initially Isolated 00:09:40.640 RUH Desc #001: RUH Type: Initially Isolated 00:09:40.640 RUH Desc #002: RUH Type: Initially Isolated 00:09:40.640 RUH Desc #003: RUH Type: Initially Isolated 00:09:40.640 RUH Desc #004: RUH Type: Initially Isolated 00:09:40.640 RUH Desc #005: RUH Type: Initially Isolated 00:09:40.640 RUH Desc #006: RUH Type: Initially Isolated 00:09:40.640 RUH Desc #007: RUH Type: Initially Isolated 00:09:40.640 00:09:40.640 FDP reclaim unit handle usage log page 00:09:40.640 ====================================== 00:09:40.640 Number of Reclaim Unit Handles: 8 00:09:40.640 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:40.640 RUH Usage Desc #001: RUH Attributes: Unused 00:09:40.640 RUH Usage Desc #002: RUH Attributes: Unused 00:09:40.640 RUH Usage Desc #003: RUH Attributes: Unused 00:09:40.640 RUH Usage Desc #004: RUH Attributes: Unused 00:09:40.640 RUH Usage Desc #005: RUH Attributes: Unused 00:09:40.640 RUH Usage Desc #006: RUH Attributes: Unused 00:09:40.640 RUH Usage Desc #007: RUH Attributes: Unused 00:09:40.640 00:09:40.640 FDP statistics log page 00:09:40.640 ======================= 00:09:40.640 Host bytes with metadata written: 533241856 00:09:40.640 Medi[2024-11-19 08:31:02.342675] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 75594 terminated unexpected 00:09:40.640 a bytes with metadata written: 533299200 00:09:40.640 Media bytes erased: 0 00:09:40.640 00:09:40.640 FDP events log page 00:09:40.641 =================== 00:09:40.641 Number of FDP events: 0 00:09:40.641 00:09:40.641 NVM Specific Namespace Data 00:09:40.641 =========================== 00:09:40.641 Logical Block Storage Tag Mask: 0 00:09:40.641 Protection Information Capabilities: 00:09:40.641 16b Guard Protection Information Storage Tag Support: No 00:09:40.641 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:40.641 Storage Tag Check Read Support: No 00:09:40.641 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.641 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.641 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.641 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.641 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.641 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.641 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.641 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.641 ===================================================== 00:09:40.641 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:40.641 ===================================================== 00:09:40.641 Controller Capabilities/Features 00:09:40.641 ================================ 00:09:40.641 Vendor ID: 1b36 00:09:40.641 Subsystem Vendor ID: 1af4 00:09:40.641 Serial Number: 12342 00:09:40.641 Model Number: QEMU NVMe Ctrl 00:09:40.641 Firmware Version: 8.0.0 00:09:40.641 Recommended Arb Burst: 6 00:09:40.641 IEEE OUI Identifier: 00 54 52 00:09:40.641 Multi-path I/O 
00:09:40.641 May have multiple subsystem ports: No 00:09:40.641 May have multiple controllers: No 00:09:40.641 Associated with SR-IOV VF: No 00:09:40.641 Max Data Transfer Size: 524288 00:09:40.641 Max Number of Namespaces: 256 00:09:40.641 Max Number of I/O Queues: 64 00:09:40.641 NVMe Specification Version (VS): 1.4 00:09:40.641 NVMe Specification Version (Identify): 1.4 00:09:40.641 Maximum Queue Entries: 2048 00:09:40.641 Contiguous Queues Required: Yes 00:09:40.641 Arbitration Mechanisms Supported 00:09:40.641 Weighted Round Robin: Not Supported 00:09:40.641 Vendor Specific: Not Supported 00:09:40.641 Reset Timeout: 7500 ms 00:09:40.641 Doorbell Stride: 4 bytes 00:09:40.641 NVM Subsystem Reset: Not Supported 00:09:40.641 Command Sets Supported 00:09:40.641 NVM Command Set: Supported 00:09:40.641 Boot Partition: Not Supported 00:09:40.641 Memory Page Size Minimum: 4096 bytes 00:09:40.641 Memory Page Size Maximum: 65536 bytes 00:09:40.641 Persistent Memory Region: Not Supported 00:09:40.641 Optional Asynchronous Events Supported 00:09:40.641 Namespace Attribute Notices: Supported 00:09:40.641 Firmware Activation Notices: Not Supported 00:09:40.641 ANA Change Notices: Not Supported 00:09:40.641 PLE Aggregate Log Change Notices: Not Supported 00:09:40.641 LBA Status Info Alert Notices: Not Supported 00:09:40.641 EGE Aggregate Log Change Notices: Not Supported 00:09:40.641 Normal NVM Subsystem Shutdown event: Not Supported 00:09:40.641 Zone Descriptor Change Notices: Not Supported 00:09:40.641 Discovery Log Change Notices: Not Supported 00:09:40.641 Controller Attributes 00:09:40.641 128-bit Host Identifier: Not Supported 00:09:40.641 Non-Operational Permissive Mode: Not Supported 00:09:40.641 NVM Sets: Not Supported 00:09:40.641 Read Recovery Levels: Not Supported 00:09:40.641 Endurance Groups: Not Supported 00:09:40.641 Predictable Latency Mode: Not Supported 00:09:40.641 Traffic Based Keep ALive: Not Supported 00:09:40.641 Namespace Granularity: Not Supported 00:09:40.641 SQ Associations: Not Supported 00:09:40.641 UUID List: Not Supported 00:09:40.641 Multi-Domain Subsystem: Not Supported 00:09:40.641 Fixed Capacity Management: Not Supported 00:09:40.641 Variable Capacity Management: Not Supported 00:09:40.642 Delete Endurance Group: Not Supported 00:09:40.642 Delete NVM Set: Not Supported 00:09:40.642 Extended LBA Formats Supported: Supported 00:09:40.642 Flexible Data Placement Supported: Not Supported 00:09:40.642 00:09:40.642 Controller Memory Buffer Support 00:09:40.642 ================================ 00:09:40.642 Supported: No 00:09:40.642 00:09:40.642 Persistent Memory Region Support 00:09:40.642 ================================ 00:09:40.642 Supported: No 00:09:40.642 00:09:40.642 Admin Command Set Attributes 00:09:40.642 ============================ 00:09:40.642 Security Send/Receive: Not Supported 00:09:40.642 Format NVM: Supported 00:09:40.642 Firmware Activate/Download: Not Supported 00:09:40.642 Namespace Management: Supported 00:09:40.642 Device Self-Test: Not Supported 00:09:40.642 Directives: Supported 00:09:40.642 NVMe-MI: Not Supported 00:09:40.642 Virtualization Management: Not Supported 00:09:40.642 Doorbell Buffer Config: Supported 00:09:40.642 Get LBA Status Capability: Not Supported 00:09:40.642 Command & Feature Lockdown Capability: Not Supported 00:09:40.642 Abort Command Limit: 4 00:09:40.642 Async Event Request Limit: 4 00:09:40.642 Number of Firmware Slots: N/A 00:09:40.642 Firmware Slot 1 Read-Only: N/A 00:09:40.642 Firmware Activation Without Reset: N/A 
00:09:40.642 Multiple Update Detection Support: N/A 00:09:40.642 Firmware Update Granularity: No Information Provided 00:09:40.642 Per-Namespace SMART Log: Yes 00:09:40.642 Asymmetric Namespace Access Log Page: Not Supported 00:09:40.642 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:40.642 Command Effects Log Page: Supported 00:09:40.642 Get Log Page Extended Data: Supported 00:09:40.642 Telemetry Log Pages: Not Supported 00:09:40.642 Persistent Event Log Pages: Not Supported 00:09:40.642 Supported Log Pages Log Page: May Support 00:09:40.642 Commands Supported & Effects Log Page: Not Supported 00:09:40.642 Feature Identifiers & Effects Log Page:May Support 00:09:40.642 NVMe-MI Commands & Effects Log Page: May Support 00:09:40.642 Data Area 4 for Telemetry Log: Not Supported 00:09:40.642 Error Log Page Entries Supported: 1 00:09:40.642 Keep Alive: Not Supported 00:09:40.642 00:09:40.642 NVM Command Set Attributes 00:09:40.642 ========================== 00:09:40.642 Submission Queue Entry Size 00:09:40.642 Max: 64 00:09:40.642 Min: 64 00:09:40.642 Completion Queue Entry Size 00:09:40.642 Max: 16 00:09:40.642 Min: 16 00:09:40.642 Number of Namespaces: 256 00:09:40.642 Compare Command: Supported 00:09:40.642 Write Uncorrectable Command: Not Supported 00:09:40.642 Dataset Management Command: Supported 00:09:40.642 Write Zeroes Command: Supported 00:09:40.642 Set Features Save Field: Supported 00:09:40.642 Reservations: Not Supported 00:09:40.642 Timestamp: Supported 00:09:40.642 Copy: Supported 00:09:40.642 Volatile Write Cache: Present 00:09:40.642 Atomic Write Unit (Normal): 1 00:09:40.642 Atomic Write Unit (PFail): 1 00:09:40.642 Atomic Compare & Write Unit: 1 00:09:40.642 Fused Compare & Write: Not Supported 00:09:40.642 Scatter-Gather List 00:09:40.642 SGL Command Set: Supported 00:09:40.642 SGL Keyed: Not Supported 00:09:40.642 SGL Bit Bucket Descriptor: Not Supported 00:09:40.642 SGL Metadata Pointer: Not Supported 00:09:40.642 Oversized SGL: Not Supported 00:09:40.642 SGL Metadata Address: Not Supported 00:09:40.642 SGL Offset: Not Supported 00:09:40.642 Transport SGL Data Block: Not Supported 00:09:40.642 Replay Protected Memory Block: Not Supported 00:09:40.642 00:09:40.642 Firmware Slot Information 00:09:40.642 ========================= 00:09:40.642 Active slot: 1 00:09:40.642 Slot 1 Firmware Revision: 1.0 00:09:40.642 00:09:40.642 00:09:40.642 Commands Supported and Effects 00:09:40.642 ============================== 00:09:40.642 Admin Commands 00:09:40.642 -------------- 00:09:40.642 Delete I/O Submission Queue (00h): Supported 00:09:40.642 Create I/O Submission Queue (01h): Supported 00:09:40.642 Get Log Page (02h): Supported 00:09:40.642 Delete I/O Completion Queue (04h): Supported 00:09:40.642 Create I/O Completion Queue (05h): Supported 00:09:40.642 Identify (06h): Supported 00:09:40.642 Abort (08h): Supported 00:09:40.642 Set Features (09h): Supported 00:09:40.642 Get Features (0Ah): Supported 00:09:40.642 Asynchronous Event Request (0Ch): Supported 00:09:40.642 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:40.642 Directive Send (19h): Supported 00:09:40.642 Directive Receive (1Ah): Supported 00:09:40.642 Virtualization Management (1Ch): Supported 00:09:40.642 Doorbell Buffer Config (7Ch): Supported 00:09:40.642 Format NVM (80h): Supported LBA-Change 00:09:40.642 I/O Commands 00:09:40.642 ------------ 00:09:40.642 Flush (00h): Supported LBA-Change 00:09:40.642 Write (01h): Supported LBA-Change 00:09:40.642 Read (02h): Supported 00:09:40.642 Compare (05h): 
Supported 00:09:40.642 Write Zeroes (08h): Supported LBA-Change 00:09:40.642 Dataset Management (09h): Supported LBA-Change 00:09:40.642 Unknown (0Ch): Supported 00:09:40.642 Unknown (12h): Supported 00:09:40.642 Copy (19h): Supported LBA-Change 00:09:40.642 Unknown (1Dh): Supported LBA-Change 00:09:40.642 00:09:40.642 Error Log 00:09:40.642 ========= 00:09:40.642 00:09:40.642 Arbitration 00:09:40.642 =========== 00:09:40.642 Arbitration Burst: no limit 00:09:40.642 00:09:40.642 Power Management 00:09:40.642 ================ 00:09:40.642 Number of Power States: 1 00:09:40.642 Current Power State: Power State #0 00:09:40.642 Power State #0: 00:09:40.642 Max Power: 25.00 W 00:09:40.643 Non-Operational State: Operational 00:09:40.643 Entry Latency: 16 microseconds 00:09:40.643 Exit Latency: 4 microseconds 00:09:40.643 Relative Read Throughput: 0 00:09:40.643 Relative Read Latency: 0 00:09:40.643 Relative Write Throughput: 0 00:09:40.643 Relative Write Latency: 0 00:09:40.643 Idle Power: Not Reported 00:09:40.643 Active Power: Not Reported 00:09:40.643 Non-Operational Permissive Mode: Not Supported 00:09:40.643 00:09:40.643 Health Information 00:09:40.643 ================== 00:09:40.643 Critical Warnings: 00:09:40.643 Available Spare Space: OK 00:09:40.643 Temperature: OK 00:09:40.643 Device Reliability: OK 00:09:40.643 Read Only: No 00:09:40.643 Volatile Memory Backup: OK 00:09:40.643 Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.643 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:40.643 Available Spare: 0% 00:09:40.643 Available Spare Threshold: 0% 00:09:40.643 Life Percentage Used: 0% 00:09:40.643 Data Units Read: 2366 00:09:40.643 Data Units Written: 2153 00:09:40.643 Host Read Commands: 103926 00:09:40.643 Host Write Commands: 102196 00:09:40.643 Controller Busy Time: 0 minutes 00:09:40.643 Power Cycles: 0 00:09:40.643 Power On Hours: 0 hours 00:09:40.643 Unsafe Shutdowns: 0 00:09:40.643 Unrecoverable Media Errors: 0 00:09:40.643 Lifetime Error Log Entries: 0 00:09:40.643 Warning Temperature Time: 0 minutes 00:09:40.643 Critical Temperature Time: 0 minutes 00:09:40.643 00:09:40.643 Number of Queues 00:09:40.643 ================ 00:09:40.643 Number of I/O Submission Queues: 64 00:09:40.643 Number of I/O Completion Queues: 64 00:09:40.643 00:09:40.643 ZNS Specific Controller Data 00:09:40.643 ============================ 00:09:40.643 Zone Append Size Limit: 0 00:09:40.643 00:09:40.643 00:09:40.643 Active Namespaces 00:09:40.643 ================= 00:09:40.643 Namespace ID:1 00:09:40.643 Error Recovery Timeout: Unlimited 00:09:40.643 Command Set Identifier: NVM (00h) 00:09:40.643 Deallocate: Supported 00:09:40.643 Deallocated/Unwritten Error: Supported 00:09:40.643 Deallocated Read Value: All 0x00 00:09:40.643 Deallocate in Write Zeroes: Not Supported 00:09:40.643 Deallocated Guard Field: 0xFFFF 00:09:40.643 Flush: Supported 00:09:40.643 Reservation: Not Supported 00:09:40.643 Namespace Sharing Capabilities: Private 00:09:40.643 Size (in LBAs): 1048576 (4GiB) 00:09:40.643 Capacity (in LBAs): 1048576 (4GiB) 00:09:40.643 Utilization (in LBAs): 1048576 (4GiB) 00:09:40.643 Thin Provisioning: Not Supported 00:09:40.643 Per-NS Atomic Units: No 00:09:40.643 Maximum Single Source Range Length: 128 00:09:40.643 Maximum Copy Length: 128 00:09:40.643 Maximum Source Range Count: 128 00:09:40.643 NGUID/EUI64 Never Reused: No 00:09:40.643 Namespace Write Protected: No 00:09:40.643 Number of LBA Formats: 8 00:09:40.643 Current LBA Format: LBA Format #04 00:09:40.643 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:09:40.643 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:40.643 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:40.643 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:40.643 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:40.643 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:40.643 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:40.643 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:40.643 00:09:40.643 NVM Specific Namespace Data 00:09:40.643 =========================== 00:09:40.643 Logical Block Storage Tag Mask: 0 00:09:40.643 Protection Information Capabilities: 00:09:40.643 16b Guard Protection Information Storage Tag Support: No 00:09:40.643 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:40.643 Storage Tag Check Read Support: No 00:09:40.643 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.643 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.643 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.643 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.643 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.643 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.643 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.643 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.643 Namespace ID:2 00:09:40.643 Error Recovery Timeout: Unlimited 00:09:40.643 Command Set Identifier: NVM (00h) 00:09:40.643 Deallocate: Supported 00:09:40.643 Deallocated/Unwritten Error: Supported 00:09:40.643 Deallocated Read Value: All 0x00 00:09:40.643 Deallocate in Write Zeroes: Not Supported 00:09:40.643 Deallocated Guard Field: 0xFFFF 00:09:40.643 Flush: Supported 00:09:40.643 Reservation: Not Supported 00:09:40.643 Namespace Sharing Capabilities: Private 00:09:40.643 Size (in LBAs): 1048576 (4GiB) 00:09:40.643 Capacity (in LBAs): 1048576 (4GiB) 00:09:40.643 Utilization (in LBAs): 1048576 (4GiB) 00:09:40.643 Thin Provisioning: Not Supported 00:09:40.643 Per-NS Atomic Units: No 00:09:40.643 Maximum Single Source Range Length: 128 00:09:40.643 Maximum Copy Length: 128 00:09:40.643 Maximum Source Range Count: 128 00:09:40.643 NGUID/EUI64 Never Reused: No 00:09:40.643 Namespace Write Protected: No 00:09:40.643 Number of LBA Formats: 8 00:09:40.643 Current LBA Format: LBA Format #04 00:09:40.643 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.643 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:40.643 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:40.643 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:40.643 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:40.643 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:40.644 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:40.644 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:40.644 00:09:40.644 NVM Specific Namespace Data 00:09:40.644 =========================== 00:09:40.644 Logical Block Storage Tag Mask: 0 00:09:40.644 Protection Information Capabilities: 00:09:40.644 16b Guard Protection Information Storage Tag Support: No 00:09:40.644 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:09:40.644 Storage Tag Check Read Support: No 00:09:40.644 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Namespace ID:3 00:09:40.644 Error Recovery Timeout: Unlimited 00:09:40.644 Command Set Identifier: NVM (00h) 00:09:40.644 Deallocate: Supported 00:09:40.644 Deallocated/Unwritten Error: Supported 00:09:40.644 Deallocated Read Value: All 0x00 00:09:40.644 Deallocate in Write Zeroes: Not Supported 00:09:40.644 Deallocated Guard Field: 0xFFFF 00:09:40.644 Flush: Supported 00:09:40.644 Reservation: Not Supported 00:09:40.644 Namespace Sharing Capabilities: Private 00:09:40.644 Size (in LBAs): 1048576 (4GiB) 00:09:40.644 Capacity (in LBAs): 1048576 (4GiB) 00:09:40.644 Utilization (in LBAs): 1048576 (4GiB) 00:09:40.644 Thin Provisioning: Not Supported 00:09:40.644 Per-NS Atomic Units: No 00:09:40.644 Maximum Single Source Range Length: 128 00:09:40.644 Maximum Copy Length: 128 00:09:40.644 Maximum Source Range Count: 128 00:09:40.644 NGUID/EUI64 Never Reused: No 00:09:40.644 Namespace Write Protected: No 00:09:40.644 Number of LBA Formats: 8 00:09:40.644 Current LBA Format: LBA Format #04 00:09:40.644 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.644 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:40.644 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:40.644 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:40.644 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:40.644 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:40.644 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:40.644 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:40.644 00:09:40.644 NVM Specific Namespace Data 00:09:40.644 =========================== 00:09:40.644 Logical Block Storage Tag Mask: 0 00:09:40.644 Protection Information Capabilities: 00:09:40.644 16b Guard Protection Information Storage Tag Support: No 00:09:40.644 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:40.644 Storage Tag Check Read Support: No 00:09:40.644 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.644 08:31:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:40.644 08:31:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:40.905 ===================================================== 00:09:40.905 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:40.905 ===================================================== 00:09:40.905 Controller Capabilities/Features 00:09:40.905 ================================ 00:09:40.905 Vendor ID: 1b36 00:09:40.905 Subsystem Vendor ID: 1af4 00:09:40.905 Serial Number: 12340 00:09:40.905 Model Number: QEMU NVMe Ctrl 00:09:40.905 Firmware Version: 8.0.0 00:09:40.905 Recommended Arb Burst: 6 00:09:40.905 IEEE OUI Identifier: 00 54 52 00:09:40.905 Multi-path I/O 00:09:40.905 May have multiple subsystem ports: No 00:09:40.905 May have multiple controllers: No 00:09:40.905 Associated with SR-IOV VF: No 00:09:40.905 Max Data Transfer Size: 524288 00:09:40.905 Max Number of Namespaces: 256 00:09:40.905 Max Number of I/O Queues: 64 00:09:40.905 NVMe Specification Version (VS): 1.4 00:09:40.905 NVMe Specification Version (Identify): 1.4 00:09:40.905 Maximum Queue Entries: 2048 00:09:40.905 Contiguous Queues Required: Yes 00:09:40.905 Arbitration Mechanisms Supported 00:09:40.905 Weighted Round Robin: Not Supported 00:09:40.905 Vendor Specific: Not Supported 00:09:40.905 Reset Timeout: 7500 ms 00:09:40.905 Doorbell Stride: 4 bytes 00:09:40.905 NVM Subsystem Reset: Not Supported 00:09:40.905 Command Sets Supported 00:09:40.905 NVM Command Set: Supported 00:09:40.905 Boot Partition: Not Supported 00:09:40.905 Memory Page Size Minimum: 4096 bytes 00:09:40.905 Memory Page Size Maximum: 65536 bytes 00:09:40.905 Persistent Memory Region: Not Supported 00:09:40.905 Optional Asynchronous Events Supported 00:09:40.905 Namespace Attribute Notices: Supported 00:09:40.905 Firmware Activation Notices: Not Supported 00:09:40.905 ANA Change Notices: Not Supported 00:09:40.905 PLE Aggregate Log Change Notices: Not Supported 00:09:40.905 LBA Status Info Alert Notices: Not Supported 00:09:40.905 EGE Aggregate Log Change Notices: Not Supported 00:09:40.905 Normal NVM Subsystem Shutdown event: Not Supported 00:09:40.905 Zone Descriptor Change Notices: Not Supported 00:09:40.905 Discovery Log Change Notices: Not Supported 00:09:40.905 Controller Attributes 00:09:40.905 128-bit Host Identifier: Not Supported 00:09:40.905 Non-Operational Permissive Mode: Not Supported 00:09:40.905 NVM Sets: Not Supported 00:09:40.905 Read Recovery Levels: Not Supported 00:09:40.905 Endurance Groups: Not Supported 00:09:40.905 Predictable Latency Mode: Not Supported 00:09:40.905 Traffic Based Keep ALive: Not Supported 00:09:40.905 Namespace Granularity: Not Supported 00:09:40.905 SQ Associations: Not Supported 00:09:40.905 UUID List: Not Supported 00:09:40.905 Multi-Domain Subsystem: Not Supported 00:09:40.905 Fixed Capacity Management: Not Supported 00:09:40.905 Variable Capacity Management: Not Supported 00:09:40.905 Delete Endurance Group: Not Supported 00:09:40.905 Delete NVM Set: Not Supported 00:09:40.905 Extended LBA Formats Supported: Supported 00:09:40.905 Flexible Data Placement Supported: Not Supported 00:09:40.905 00:09:40.905 Controller Memory Buffer Support 00:09:40.905 ================================ 00:09:40.905 Supported: No 00:09:40.905 00:09:40.905 Persistent Memory Region Support 00:09:40.905 
================================ 00:09:40.905 Supported: No 00:09:40.905 00:09:40.905 Admin Command Set Attributes 00:09:40.905 ============================ 00:09:40.905 Security Send/Receive: Not Supported 00:09:40.905 Format NVM: Supported 00:09:40.905 Firmware Activate/Download: Not Supported 00:09:40.905 Namespace Management: Supported 00:09:40.905 Device Self-Test: Not Supported 00:09:40.905 Directives: Supported 00:09:40.905 NVMe-MI: Not Supported 00:09:40.905 Virtualization Management: Not Supported 00:09:40.905 Doorbell Buffer Config: Supported 00:09:40.905 Get LBA Status Capability: Not Supported 00:09:40.905 Command & Feature Lockdown Capability: Not Supported 00:09:40.905 Abort Command Limit: 4 00:09:40.905 Async Event Request Limit: 4 00:09:40.905 Number of Firmware Slots: N/A 00:09:40.905 Firmware Slot 1 Read-Only: N/A 00:09:40.905 Firmware Activation Without Reset: N/A 00:09:40.905 Multiple Update Detection Support: N/A 00:09:40.905 Firmware Update Granularity: No Information Provided 00:09:40.905 Per-Namespace SMART Log: Yes 00:09:40.905 Asymmetric Namespace Access Log Page: Not Supported 00:09:40.905 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:40.905 Command Effects Log Page: Supported 00:09:40.905 Get Log Page Extended Data: Supported 00:09:40.905 Telemetry Log Pages: Not Supported 00:09:40.905 Persistent Event Log Pages: Not Supported 00:09:40.905 Supported Log Pages Log Page: May Support 00:09:40.905 Commands Supported & Effects Log Page: Not Supported 00:09:40.906 Feature Identifiers & Effects Log Page:May Support 00:09:40.906 NVMe-MI Commands & Effects Log Page: May Support 00:09:40.906 Data Area 4 for Telemetry Log: Not Supported 00:09:40.906 Error Log Page Entries Supported: 1 00:09:40.906 Keep Alive: Not Supported 00:09:40.906 00:09:40.906 NVM Command Set Attributes 00:09:40.906 ========================== 00:09:40.906 Submission Queue Entry Size 00:09:40.906 Max: 64 00:09:40.906 Min: 64 00:09:40.906 Completion Queue Entry Size 00:09:40.906 Max: 16 00:09:40.906 Min: 16 00:09:40.906 Number of Namespaces: 256 00:09:40.906 Compare Command: Supported 00:09:40.906 Write Uncorrectable Command: Not Supported 00:09:40.906 Dataset Management Command: Supported 00:09:40.906 Write Zeroes Command: Supported 00:09:40.906 Set Features Save Field: Supported 00:09:40.906 Reservations: Not Supported 00:09:40.906 Timestamp: Supported 00:09:40.906 Copy: Supported 00:09:40.906 Volatile Write Cache: Present 00:09:40.906 Atomic Write Unit (Normal): 1 00:09:40.906 Atomic Write Unit (PFail): 1 00:09:40.906 Atomic Compare & Write Unit: 1 00:09:40.906 Fused Compare & Write: Not Supported 00:09:40.906 Scatter-Gather List 00:09:40.906 SGL Command Set: Supported 00:09:40.906 SGL Keyed: Not Supported 00:09:40.906 SGL Bit Bucket Descriptor: Not Supported 00:09:40.906 SGL Metadata Pointer: Not Supported 00:09:40.906 Oversized SGL: Not Supported 00:09:40.906 SGL Metadata Address: Not Supported 00:09:40.906 SGL Offset: Not Supported 00:09:40.906 Transport SGL Data Block: Not Supported 00:09:40.906 Replay Protected Memory Block: Not Supported 00:09:40.906 00:09:40.906 Firmware Slot Information 00:09:40.906 ========================= 00:09:40.906 Active slot: 1 00:09:40.906 Slot 1 Firmware Revision: 1.0 00:09:40.906 00:09:40.906 00:09:40.906 Commands Supported and Effects 00:09:40.906 ============================== 00:09:40.906 Admin Commands 00:09:40.906 -------------- 00:09:40.906 Delete I/O Submission Queue (00h): Supported 00:09:40.906 Create I/O Submission Queue (01h): Supported 00:09:40.906 
Get Log Page (02h): Supported 00:09:40.906 Delete I/O Completion Queue (04h): Supported 00:09:40.906 Create I/O Completion Queue (05h): Supported 00:09:40.906 Identify (06h): Supported 00:09:40.906 Abort (08h): Supported 00:09:40.906 Set Features (09h): Supported 00:09:40.906 Get Features (0Ah): Supported 00:09:40.906 Asynchronous Event Request (0Ch): Supported 00:09:40.906 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:40.906 Directive Send (19h): Supported 00:09:40.906 Directive Receive (1Ah): Supported 00:09:40.906 Virtualization Management (1Ch): Supported 00:09:40.906 Doorbell Buffer Config (7Ch): Supported 00:09:40.906 Format NVM (80h): Supported LBA-Change 00:09:40.906 I/O Commands 00:09:40.906 ------------ 00:09:40.906 Flush (00h): Supported LBA-Change 00:09:40.906 Write (01h): Supported LBA-Change 00:09:40.906 Read (02h): Supported 00:09:40.906 Compare (05h): Supported 00:09:40.906 Write Zeroes (08h): Supported LBA-Change 00:09:40.906 Dataset Management (09h): Supported LBA-Change 00:09:40.906 Unknown (0Ch): Supported 00:09:40.906 Unknown (12h): Supported 00:09:40.906 Copy (19h): Supported LBA-Change 00:09:40.906 Unknown (1Dh): Supported LBA-Change 00:09:40.906 00:09:40.906 Error Log 00:09:40.906 ========= 00:09:40.906 00:09:40.906 Arbitration 00:09:40.906 =========== 00:09:40.906 Arbitration Burst: no limit 00:09:40.906 00:09:40.906 Power Management 00:09:40.906 ================ 00:09:40.906 Number of Power States: 1 00:09:40.906 Current Power State: Power State #0 00:09:40.906 Power State #0: 00:09:40.906 Max Power: 25.00 W 00:09:40.906 Non-Operational State: Operational 00:09:40.906 Entry Latency: 16 microseconds 00:09:40.906 Exit Latency: 4 microseconds 00:09:40.906 Relative Read Throughput: 0 00:09:40.906 Relative Read Latency: 0 00:09:40.906 Relative Write Throughput: 0 00:09:40.906 Relative Write Latency: 0 00:09:40.906 Idle Power: Not Reported 00:09:40.906 Active Power: Not Reported 00:09:40.906 Non-Operational Permissive Mode: Not Supported 00:09:40.906 00:09:40.906 Health Information 00:09:40.906 ================== 00:09:40.906 Critical Warnings: 00:09:40.906 Available Spare Space: OK 00:09:40.906 Temperature: OK 00:09:40.906 Device Reliability: OK 00:09:40.906 Read Only: No 00:09:40.906 Volatile Memory Backup: OK 00:09:40.906 Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.906 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:40.906 Available Spare: 0% 00:09:40.906 Available Spare Threshold: 0% 00:09:40.906 Life Percentage Used: 0% 00:09:40.906 Data Units Read: 731 00:09:40.906 Data Units Written: 659 00:09:40.906 Host Read Commands: 33868 00:09:40.906 Host Write Commands: 33654 00:09:40.906 Controller Busy Time: 0 minutes 00:09:40.906 Power Cycles: 0 00:09:40.906 Power On Hours: 0 hours 00:09:40.906 Unsafe Shutdowns: 0 00:09:40.906 Unrecoverable Media Errors: 0 00:09:40.906 Lifetime Error Log Entries: 0 00:09:40.906 Warning Temperature Time: 0 minutes 00:09:40.906 Critical Temperature Time: 0 minutes 00:09:40.906 00:09:40.906 Number of Queues 00:09:40.906 ================ 00:09:40.906 Number of I/O Submission Queues: 64 00:09:40.906 Number of I/O Completion Queues: 64 00:09:40.906 00:09:40.906 ZNS Specific Controller Data 00:09:40.906 ============================ 00:09:40.906 Zone Append Size Limit: 0 00:09:40.906 00:09:40.906 00:09:40.906 Active Namespaces 00:09:40.906 ================= 00:09:40.906 Namespace ID:1 00:09:40.906 Error Recovery Timeout: Unlimited 00:09:40.906 Command Set Identifier: NVM (00h) 00:09:40.906 Deallocate: Supported 
00:09:40.906 Deallocated/Unwritten Error: Supported 00:09:40.906 Deallocated Read Value: All 0x00 00:09:40.906 Deallocate in Write Zeroes: Not Supported 00:09:40.906 Deallocated Guard Field: 0xFFFF 00:09:40.906 Flush: Supported 00:09:40.906 Reservation: Not Supported 00:09:40.906 Metadata Transferred as: Separate Metadata Buffer 00:09:40.906 Namespace Sharing Capabilities: Private 00:09:40.906 Size (in LBAs): 1548666 (5GiB) 00:09:40.906 Capacity (in LBAs): 1548666 (5GiB) 00:09:40.906 Utilization (in LBAs): 1548666 (5GiB) 00:09:40.906 Thin Provisioning: Not Supported 00:09:40.906 Per-NS Atomic Units: No 00:09:40.906 Maximum Single Source Range Length: 128 00:09:40.906 Maximum Copy Length: 128 00:09:40.906 Maximum Source Range Count: 128 00:09:40.906 NGUID/EUI64 Never Reused: No 00:09:40.906 Namespace Write Protected: No 00:09:40.906 Number of LBA Formats: 8 00:09:40.906 Current LBA Format: LBA Format #07 00:09:40.906 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.906 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:40.906 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:40.906 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:40.906 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:40.906 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:40.906 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:40.906 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:40.906 00:09:40.906 NVM Specific Namespace Data 00:09:40.906 =========================== 00:09:40.906 Logical Block Storage Tag Mask: 0 00:09:40.906 Protection Information Capabilities: 00:09:40.906 16b Guard Protection Information Storage Tag Support: No 00:09:40.906 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:40.906 Storage Tag Check Read Support: No 00:09:40.906 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.906 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.906 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.906 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.906 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.906 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.906 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.906 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.906 08:31:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:40.906 08:31:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:41.166 ===================================================== 00:09:41.166 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:41.166 ===================================================== 00:09:41.166 Controller Capabilities/Features 00:09:41.166 ================================ 00:09:41.166 Vendor ID: 1b36 00:09:41.166 Subsystem Vendor ID: 1af4 00:09:41.166 Serial Number: 12341 00:09:41.166 Model Number: QEMU NVMe Ctrl 00:09:41.166 Firmware Version: 8.0.0 00:09:41.166 Recommended Arb Burst: 6 00:09:41.166 IEEE OUI Identifier: 00 54 52 00:09:41.166 Multi-path I/O 00:09:41.166 May have multiple subsystem ports: No 00:09:41.166 May have multiple 
controllers: No 00:09:41.166 Associated with SR-IOV VF: No 00:09:41.166 Max Data Transfer Size: 524288 00:09:41.166 Max Number of Namespaces: 256 00:09:41.166 Max Number of I/O Queues: 64 00:09:41.166 NVMe Specification Version (VS): 1.4 00:09:41.166 NVMe Specification Version (Identify): 1.4 00:09:41.166 Maximum Queue Entries: 2048 00:09:41.166 Contiguous Queues Required: Yes 00:09:41.166 Arbitration Mechanisms Supported 00:09:41.166 Weighted Round Robin: Not Supported 00:09:41.166 Vendor Specific: Not Supported 00:09:41.166 Reset Timeout: 7500 ms 00:09:41.166 Doorbell Stride: 4 bytes 00:09:41.166 NVM Subsystem Reset: Not Supported 00:09:41.166 Command Sets Supported 00:09:41.166 NVM Command Set: Supported 00:09:41.166 Boot Partition: Not Supported 00:09:41.166 Memory Page Size Minimum: 4096 bytes 00:09:41.166 Memory Page Size Maximum: 65536 bytes 00:09:41.166 Persistent Memory Region: Not Supported 00:09:41.166 Optional Asynchronous Events Supported 00:09:41.166 Namespace Attribute Notices: Supported 00:09:41.166 Firmware Activation Notices: Not Supported 00:09:41.166 ANA Change Notices: Not Supported 00:09:41.166 PLE Aggregate Log Change Notices: Not Supported 00:09:41.166 LBA Status Info Alert Notices: Not Supported 00:09:41.166 EGE Aggregate Log Change Notices: Not Supported 00:09:41.166 Normal NVM Subsystem Shutdown event: Not Supported 00:09:41.166 Zone Descriptor Change Notices: Not Supported 00:09:41.166 Discovery Log Change Notices: Not Supported 00:09:41.166 Controller Attributes 00:09:41.166 128-bit Host Identifier: Not Supported 00:09:41.166 Non-Operational Permissive Mode: Not Supported 00:09:41.166 NVM Sets: Not Supported 00:09:41.166 Read Recovery Levels: Not Supported 00:09:41.166 Endurance Groups: Not Supported 00:09:41.166 Predictable Latency Mode: Not Supported 00:09:41.166 Traffic Based Keep ALive: Not Supported 00:09:41.166 Namespace Granularity: Not Supported 00:09:41.166 SQ Associations: Not Supported 00:09:41.166 UUID List: Not Supported 00:09:41.166 Multi-Domain Subsystem: Not Supported 00:09:41.166 Fixed Capacity Management: Not Supported 00:09:41.166 Variable Capacity Management: Not Supported 00:09:41.166 Delete Endurance Group: Not Supported 00:09:41.166 Delete NVM Set: Not Supported 00:09:41.166 Extended LBA Formats Supported: Supported 00:09:41.166 Flexible Data Placement Supported: Not Supported 00:09:41.166 00:09:41.166 Controller Memory Buffer Support 00:09:41.166 ================================ 00:09:41.166 Supported: No 00:09:41.166 00:09:41.166 Persistent Memory Region Support 00:09:41.166 ================================ 00:09:41.166 Supported: No 00:09:41.166 00:09:41.166 Admin Command Set Attributes 00:09:41.166 ============================ 00:09:41.166 Security Send/Receive: Not Supported 00:09:41.166 Format NVM: Supported 00:09:41.166 Firmware Activate/Download: Not Supported 00:09:41.166 Namespace Management: Supported 00:09:41.166 Device Self-Test: Not Supported 00:09:41.166 Directives: Supported 00:09:41.166 NVMe-MI: Not Supported 00:09:41.166 Virtualization Management: Not Supported 00:09:41.166 Doorbell Buffer Config: Supported 00:09:41.166 Get LBA Status Capability: Not Supported 00:09:41.166 Command & Feature Lockdown Capability: Not Supported 00:09:41.166 Abort Command Limit: 4 00:09:41.166 Async Event Request Limit: 4 00:09:41.166 Number of Firmware Slots: N/A 00:09:41.166 Firmware Slot 1 Read-Only: N/A 00:09:41.166 Firmware Activation Without Reset: N/A 00:09:41.166 Multiple Update Detection Support: N/A 00:09:41.166 Firmware Update 
Granularity: No Information Provided 00:09:41.166 Per-Namespace SMART Log: Yes 00:09:41.166 Asymmetric Namespace Access Log Page: Not Supported 00:09:41.166 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:41.166 Command Effects Log Page: Supported 00:09:41.166 Get Log Page Extended Data: Supported 00:09:41.167 Telemetry Log Pages: Not Supported 00:09:41.167 Persistent Event Log Pages: Not Supported 00:09:41.167 Supported Log Pages Log Page: May Support 00:09:41.167 Commands Supported & Effects Log Page: Not Supported 00:09:41.167 Feature Identifiers & Effects Log Page:May Support 00:09:41.167 NVMe-MI Commands & Effects Log Page: May Support 00:09:41.167 Data Area 4 for Telemetry Log: Not Supported 00:09:41.167 Error Log Page Entries Supported: 1 00:09:41.167 Keep Alive: Not Supported 00:09:41.167 00:09:41.167 NVM Command Set Attributes 00:09:41.167 ========================== 00:09:41.167 Submission Queue Entry Size 00:09:41.167 Max: 64 00:09:41.167 Min: 64 00:09:41.167 Completion Queue Entry Size 00:09:41.167 Max: 16 00:09:41.167 Min: 16 00:09:41.167 Number of Namespaces: 256 00:09:41.167 Compare Command: Supported 00:09:41.167 Write Uncorrectable Command: Not Supported 00:09:41.167 Dataset Management Command: Supported 00:09:41.167 Write Zeroes Command: Supported 00:09:41.167 Set Features Save Field: Supported 00:09:41.167 Reservations: Not Supported 00:09:41.167 Timestamp: Supported 00:09:41.167 Copy: Supported 00:09:41.167 Volatile Write Cache: Present 00:09:41.167 Atomic Write Unit (Normal): 1 00:09:41.167 Atomic Write Unit (PFail): 1 00:09:41.167 Atomic Compare & Write Unit: 1 00:09:41.167 Fused Compare & Write: Not Supported 00:09:41.167 Scatter-Gather List 00:09:41.167 SGL Command Set: Supported 00:09:41.167 SGL Keyed: Not Supported 00:09:41.167 SGL Bit Bucket Descriptor: Not Supported 00:09:41.167 SGL Metadata Pointer: Not Supported 00:09:41.167 Oversized SGL: Not Supported 00:09:41.167 SGL Metadata Address: Not Supported 00:09:41.167 SGL Offset: Not Supported 00:09:41.167 Transport SGL Data Block: Not Supported 00:09:41.167 Replay Protected Memory Block: Not Supported 00:09:41.167 00:09:41.167 Firmware Slot Information 00:09:41.167 ========================= 00:09:41.167 Active slot: 1 00:09:41.167 Slot 1 Firmware Revision: 1.0 00:09:41.167 00:09:41.167 00:09:41.167 Commands Supported and Effects 00:09:41.167 ============================== 00:09:41.167 Admin Commands 00:09:41.167 -------------- 00:09:41.167 Delete I/O Submission Queue (00h): Supported 00:09:41.167 Create I/O Submission Queue (01h): Supported 00:09:41.167 Get Log Page (02h): Supported 00:09:41.167 Delete I/O Completion Queue (04h): Supported 00:09:41.167 Create I/O Completion Queue (05h): Supported 00:09:41.167 Identify (06h): Supported 00:09:41.167 Abort (08h): Supported 00:09:41.167 Set Features (09h): Supported 00:09:41.167 Get Features (0Ah): Supported 00:09:41.167 Asynchronous Event Request (0Ch): Supported 00:09:41.167 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:41.167 Directive Send (19h): Supported 00:09:41.167 Directive Receive (1Ah): Supported 00:09:41.167 Virtualization Management (1Ch): Supported 00:09:41.167 Doorbell Buffer Config (7Ch): Supported 00:09:41.167 Format NVM (80h): Supported LBA-Change 00:09:41.167 I/O Commands 00:09:41.167 ------------ 00:09:41.167 Flush (00h): Supported LBA-Change 00:09:41.167 Write (01h): Supported LBA-Change 00:09:41.167 Read (02h): Supported 00:09:41.167 Compare (05h): Supported 00:09:41.167 Write Zeroes (08h): Supported LBA-Change 00:09:41.167 
Dataset Management (09h): Supported LBA-Change 00:09:41.167 Unknown (0Ch): Supported 00:09:41.167 Unknown (12h): Supported 00:09:41.167 Copy (19h): Supported LBA-Change 00:09:41.167 Unknown (1Dh): Supported LBA-Change 00:09:41.167 00:09:41.167 Error Log 00:09:41.167 ========= 00:09:41.167 00:09:41.167 Arbitration 00:09:41.167 =========== 00:09:41.167 Arbitration Burst: no limit 00:09:41.167 00:09:41.167 Power Management 00:09:41.167 ================ 00:09:41.167 Number of Power States: 1 00:09:41.167 Current Power State: Power State #0 00:09:41.167 Power State #0: 00:09:41.167 Max Power: 25.00 W 00:09:41.167 Non-Operational State: Operational 00:09:41.167 Entry Latency: 16 microseconds 00:09:41.167 Exit Latency: 4 microseconds 00:09:41.167 Relative Read Throughput: 0 00:09:41.167 Relative Read Latency: 0 00:09:41.167 Relative Write Throughput: 0 00:09:41.167 Relative Write Latency: 0 00:09:41.167 Idle Power: Not Reported 00:09:41.167 Active Power: Not Reported 00:09:41.167 Non-Operational Permissive Mode: Not Supported 00:09:41.167 00:09:41.167 Health Information 00:09:41.167 ================== 00:09:41.167 Critical Warnings: 00:09:41.167 Available Spare Space: OK 00:09:41.167 Temperature: OK 00:09:41.167 Device Reliability: OK 00:09:41.167 Read Only: No 00:09:41.167 Volatile Memory Backup: OK 00:09:41.167 Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.167 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:41.167 Available Spare: 0% 00:09:41.167 Available Spare Threshold: 0% 00:09:41.167 Life Percentage Used: 0% 00:09:41.167 Data Units Read: 1120 00:09:41.167 Data Units Written: 980 00:09:41.167 Host Read Commands: 50854 00:09:41.167 Host Write Commands: 49543 00:09:41.167 Controller Busy Time: 0 minutes 00:09:41.167 Power Cycles: 0 00:09:41.167 Power On Hours: 0 hours 00:09:41.167 Unsafe Shutdowns: 0 00:09:41.167 Unrecoverable Media Errors: 0 00:09:41.167 Lifetime Error Log Entries: 0 00:09:41.167 Warning Temperature Time: 0 minutes 00:09:41.167 Critical Temperature Time: 0 minutes 00:09:41.167 00:09:41.167 Number of Queues 00:09:41.167 ================ 00:09:41.167 Number of I/O Submission Queues: 64 00:09:41.167 Number of I/O Completion Queues: 64 00:09:41.167 00:09:41.167 ZNS Specific Controller Data 00:09:41.167 ============================ 00:09:41.167 Zone Append Size Limit: 0 00:09:41.167 00:09:41.167 00:09:41.167 Active Namespaces 00:09:41.167 ================= 00:09:41.167 Namespace ID:1 00:09:41.167 Error Recovery Timeout: Unlimited 00:09:41.167 Command Set Identifier: NVM (00h) 00:09:41.167 Deallocate: Supported 00:09:41.167 Deallocated/Unwritten Error: Supported 00:09:41.167 Deallocated Read Value: All 0x00 00:09:41.167 Deallocate in Write Zeroes: Not Supported 00:09:41.167 Deallocated Guard Field: 0xFFFF 00:09:41.167 Flush: Supported 00:09:41.167 Reservation: Not Supported 00:09:41.167 Namespace Sharing Capabilities: Private 00:09:41.167 Size (in LBAs): 1310720 (5GiB) 00:09:41.167 Capacity (in LBAs): 1310720 (5GiB) 00:09:41.167 Utilization (in LBAs): 1310720 (5GiB) 00:09:41.167 Thin Provisioning: Not Supported 00:09:41.167 Per-NS Atomic Units: No 00:09:41.167 Maximum Single Source Range Length: 128 00:09:41.167 Maximum Copy Length: 128 00:09:41.167 Maximum Source Range Count: 128 00:09:41.167 NGUID/EUI64 Never Reused: No 00:09:41.167 Namespace Write Protected: No 00:09:41.167 Number of LBA Formats: 8 00:09:41.167 Current LBA Format: LBA Format #04 00:09:41.167 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.167 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:09:41.167 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.167 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.167 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.167 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.167 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.167 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.167 00:09:41.167 NVM Specific Namespace Data 00:09:41.167 =========================== 00:09:41.167 Logical Block Storage Tag Mask: 0 00:09:41.167 Protection Information Capabilities: 00:09:41.167 16b Guard Protection Information Storage Tag Support: No 00:09:41.167 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:41.167 Storage Tag Check Read Support: No 00:09:41.167 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.167 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.167 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.167 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.167 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.167 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.167 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.167 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.167 08:31:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:41.167 08:31:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:41.427 ===================================================== 00:09:41.427 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:41.427 ===================================================== 00:09:41.427 Controller Capabilities/Features 00:09:41.427 ================================ 00:09:41.427 Vendor ID: 1b36 00:09:41.427 Subsystem Vendor ID: 1af4 00:09:41.427 Serial Number: 12342 00:09:41.427 Model Number: QEMU NVMe Ctrl 00:09:41.427 Firmware Version: 8.0.0 00:09:41.427 Recommended Arb Burst: 6 00:09:41.427 IEEE OUI Identifier: 00 54 52 00:09:41.427 Multi-path I/O 00:09:41.427 May have multiple subsystem ports: No 00:09:41.427 May have multiple controllers: No 00:09:41.427 Associated with SR-IOV VF: No 00:09:41.427 Max Data Transfer Size: 524288 00:09:41.427 Max Number of Namespaces: 256 00:09:41.427 Max Number of I/O Queues: 64 00:09:41.427 NVMe Specification Version (VS): 1.4 00:09:41.427 NVMe Specification Version (Identify): 1.4 00:09:41.427 Maximum Queue Entries: 2048 00:09:41.427 Contiguous Queues Required: Yes 00:09:41.427 Arbitration Mechanisms Supported 00:09:41.427 Weighted Round Robin: Not Supported 00:09:41.427 Vendor Specific: Not Supported 00:09:41.427 Reset Timeout: 7500 ms 00:09:41.427 Doorbell Stride: 4 bytes 00:09:41.427 NVM Subsystem Reset: Not Supported 00:09:41.427 Command Sets Supported 00:09:41.427 NVM Command Set: Supported 00:09:41.427 Boot Partition: Not Supported 00:09:41.427 Memory Page Size Minimum: 4096 bytes 00:09:41.427 Memory Page Size Maximum: 65536 bytes 00:09:41.427 Persistent Memory Region: Not Supported 00:09:41.427 Optional Asynchronous Events Supported 00:09:41.427 Namespace Attribute Notices: Supported 00:09:41.427 Firmware 
Activation Notices: Not Supported 00:09:41.427 ANA Change Notices: Not Supported 00:09:41.427 PLE Aggregate Log Change Notices: Not Supported 00:09:41.427 LBA Status Info Alert Notices: Not Supported 00:09:41.427 EGE Aggregate Log Change Notices: Not Supported 00:09:41.427 Normal NVM Subsystem Shutdown event: Not Supported 00:09:41.427 Zone Descriptor Change Notices: Not Supported 00:09:41.427 Discovery Log Change Notices: Not Supported 00:09:41.427 Controller Attributes 00:09:41.427 128-bit Host Identifier: Not Supported 00:09:41.427 Non-Operational Permissive Mode: Not Supported 00:09:41.427 NVM Sets: Not Supported 00:09:41.427 Read Recovery Levels: Not Supported 00:09:41.427 Endurance Groups: Not Supported 00:09:41.427 Predictable Latency Mode: Not Supported 00:09:41.427 Traffic Based Keep ALive: Not Supported 00:09:41.427 Namespace Granularity: Not Supported 00:09:41.427 SQ Associations: Not Supported 00:09:41.427 UUID List: Not Supported 00:09:41.427 Multi-Domain Subsystem: Not Supported 00:09:41.427 Fixed Capacity Management: Not Supported 00:09:41.427 Variable Capacity Management: Not Supported 00:09:41.427 Delete Endurance Group: Not Supported 00:09:41.427 Delete NVM Set: Not Supported 00:09:41.427 Extended LBA Formats Supported: Supported 00:09:41.427 Flexible Data Placement Supported: Not Supported 00:09:41.427 00:09:41.427 Controller Memory Buffer Support 00:09:41.427 ================================ 00:09:41.427 Supported: No 00:09:41.427 00:09:41.427 Persistent Memory Region Support 00:09:41.427 ================================ 00:09:41.427 Supported: No 00:09:41.427 00:09:41.427 Admin Command Set Attributes 00:09:41.427 ============================ 00:09:41.427 Security Send/Receive: Not Supported 00:09:41.427 Format NVM: Supported 00:09:41.427 Firmware Activate/Download: Not Supported 00:09:41.427 Namespace Management: Supported 00:09:41.427 Device Self-Test: Not Supported 00:09:41.427 Directives: Supported 00:09:41.427 NVMe-MI: Not Supported 00:09:41.427 Virtualization Management: Not Supported 00:09:41.427 Doorbell Buffer Config: Supported 00:09:41.427 Get LBA Status Capability: Not Supported 00:09:41.427 Command & Feature Lockdown Capability: Not Supported 00:09:41.427 Abort Command Limit: 4 00:09:41.427 Async Event Request Limit: 4 00:09:41.427 Number of Firmware Slots: N/A 00:09:41.427 Firmware Slot 1 Read-Only: N/A 00:09:41.427 Firmware Activation Without Reset: N/A 00:09:41.427 Multiple Update Detection Support: N/A 00:09:41.427 Firmware Update Granularity: No Information Provided 00:09:41.427 Per-Namespace SMART Log: Yes 00:09:41.427 Asymmetric Namespace Access Log Page: Not Supported 00:09:41.427 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:41.427 Command Effects Log Page: Supported 00:09:41.427 Get Log Page Extended Data: Supported 00:09:41.427 Telemetry Log Pages: Not Supported 00:09:41.427 Persistent Event Log Pages: Not Supported 00:09:41.427 Supported Log Pages Log Page: May Support 00:09:41.427 Commands Supported & Effects Log Page: Not Supported 00:09:41.427 Feature Identifiers & Effects Log Page:May Support 00:09:41.427 NVMe-MI Commands & Effects Log Page: May Support 00:09:41.427 Data Area 4 for Telemetry Log: Not Supported 00:09:41.427 Error Log Page Entries Supported: 1 00:09:41.427 Keep Alive: Not Supported 00:09:41.427 00:09:41.427 NVM Command Set Attributes 00:09:41.427 ========================== 00:09:41.427 Submission Queue Entry Size 00:09:41.427 Max: 64 00:09:41.427 Min: 64 00:09:41.427 Completion Queue Entry Size 00:09:41.427 Max: 16 
00:09:41.427 Min: 16 00:09:41.427 Number of Namespaces: 256 00:09:41.427 Compare Command: Supported 00:09:41.427 Write Uncorrectable Command: Not Supported 00:09:41.427 Dataset Management Command: Supported 00:09:41.428 Write Zeroes Command: Supported 00:09:41.428 Set Features Save Field: Supported 00:09:41.428 Reservations: Not Supported 00:09:41.428 Timestamp: Supported 00:09:41.428 Copy: Supported 00:09:41.428 Volatile Write Cache: Present 00:09:41.428 Atomic Write Unit (Normal): 1 00:09:41.428 Atomic Write Unit (PFail): 1 00:09:41.428 Atomic Compare & Write Unit: 1 00:09:41.428 Fused Compare & Write: Not Supported 00:09:41.428 Scatter-Gather List 00:09:41.428 SGL Command Set: Supported 00:09:41.428 SGL Keyed: Not Supported 00:09:41.428 SGL Bit Bucket Descriptor: Not Supported 00:09:41.428 SGL Metadata Pointer: Not Supported 00:09:41.428 Oversized SGL: Not Supported 00:09:41.428 SGL Metadata Address: Not Supported 00:09:41.428 SGL Offset: Not Supported 00:09:41.428 Transport SGL Data Block: Not Supported 00:09:41.428 Replay Protected Memory Block: Not Supported 00:09:41.428 00:09:41.428 Firmware Slot Information 00:09:41.428 ========================= 00:09:41.428 Active slot: 1 00:09:41.428 Slot 1 Firmware Revision: 1.0 00:09:41.428 00:09:41.428 00:09:41.428 Commands Supported and Effects 00:09:41.428 ============================== 00:09:41.428 Admin Commands 00:09:41.428 -------------- 00:09:41.428 Delete I/O Submission Queue (00h): Supported 00:09:41.428 Create I/O Submission Queue (01h): Supported 00:09:41.428 Get Log Page (02h): Supported 00:09:41.428 Delete I/O Completion Queue (04h): Supported 00:09:41.428 Create I/O Completion Queue (05h): Supported 00:09:41.428 Identify (06h): Supported 00:09:41.428 Abort (08h): Supported 00:09:41.428 Set Features (09h): Supported 00:09:41.428 Get Features (0Ah): Supported 00:09:41.428 Asynchronous Event Request (0Ch): Supported 00:09:41.428 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:41.428 Directive Send (19h): Supported 00:09:41.428 Directive Receive (1Ah): Supported 00:09:41.428 Virtualization Management (1Ch): Supported 00:09:41.428 Doorbell Buffer Config (7Ch): Supported 00:09:41.428 Format NVM (80h): Supported LBA-Change 00:09:41.428 I/O Commands 00:09:41.428 ------------ 00:09:41.428 Flush (00h): Supported LBA-Change 00:09:41.428 Write (01h): Supported LBA-Change 00:09:41.428 Read (02h): Supported 00:09:41.428 Compare (05h): Supported 00:09:41.428 Write Zeroes (08h): Supported LBA-Change 00:09:41.428 Dataset Management (09h): Supported LBA-Change 00:09:41.428 Unknown (0Ch): Supported 00:09:41.428 Unknown (12h): Supported 00:09:41.428 Copy (19h): Supported LBA-Change 00:09:41.428 Unknown (1Dh): Supported LBA-Change 00:09:41.428 00:09:41.428 Error Log 00:09:41.428 ========= 00:09:41.428 00:09:41.428 Arbitration 00:09:41.428 =========== 00:09:41.428 Arbitration Burst: no limit 00:09:41.428 00:09:41.428 Power Management 00:09:41.428 ================ 00:09:41.428 Number of Power States: 1 00:09:41.428 Current Power State: Power State #0 00:09:41.428 Power State #0: 00:09:41.428 Max Power: 25.00 W 00:09:41.428 Non-Operational State: Operational 00:09:41.428 Entry Latency: 16 microseconds 00:09:41.428 Exit Latency: 4 microseconds 00:09:41.428 Relative Read Throughput: 0 00:09:41.428 Relative Read Latency: 0 00:09:41.428 Relative Write Throughput: 0 00:09:41.428 Relative Write Latency: 0 00:09:41.428 Idle Power: Not Reported 00:09:41.428 Active Power: Not Reported 00:09:41.428 Non-Operational Permissive Mode: Not Supported 
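The "Size (in LBAs)" figures in the namespace listings above and just below convert to the GiB values printed next to them via the 4096-byte data size of the active LBA Format #04. A quick shell check of that arithmetic, using the 1048576-LBA namespaces of the 12342 controller (illustrative only; the variable name is not from nvme.sh):

    # Assumes the 4096-byte data size of LBA Format #04 reported in these dumps.
    lbas=1048576
    echo "$(( lbas * 4096 / 1024 / 1024 / 1024 )) GiB"   # prints "4 GiB", matching "(4GiB)"
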
00:09:41.428 00:09:41.428 Health Information 00:09:41.428 ================== 00:09:41.428 Critical Warnings: 00:09:41.428 Available Spare Space: OK 00:09:41.428 Temperature: OK 00:09:41.428 Device Reliability: OK 00:09:41.428 Read Only: No 00:09:41.428 Volatile Memory Backup: OK 00:09:41.428 Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.428 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:41.428 Available Spare: 0% 00:09:41.428 Available Spare Threshold: 0% 00:09:41.428 Life Percentage Used: 0% 00:09:41.428 Data Units Read: 2366 00:09:41.428 Data Units Written: 2153 00:09:41.428 Host Read Commands: 103926 00:09:41.428 Host Write Commands: 102196 00:09:41.428 Controller Busy Time: 0 minutes 00:09:41.428 Power Cycles: 0 00:09:41.428 Power On Hours: 0 hours 00:09:41.428 Unsafe Shutdowns: 0 00:09:41.428 Unrecoverable Media Errors: 0 00:09:41.428 Lifetime Error Log Entries: 0 00:09:41.428 Warning Temperature Time: 0 minutes 00:09:41.428 Critical Temperature Time: 0 minutes 00:09:41.428 00:09:41.428 Number of Queues 00:09:41.428 ================ 00:09:41.428 Number of I/O Submission Queues: 64 00:09:41.428 Number of I/O Completion Queues: 64 00:09:41.428 00:09:41.428 ZNS Specific Controller Data 00:09:41.428 ============================ 00:09:41.428 Zone Append Size Limit: 0 00:09:41.428 00:09:41.428 00:09:41.428 Active Namespaces 00:09:41.428 ================= 00:09:41.428 Namespace ID:1 00:09:41.428 Error Recovery Timeout: Unlimited 00:09:41.428 Command Set Identifier: NVM (00h) 00:09:41.428 Deallocate: Supported 00:09:41.428 Deallocated/Unwritten Error: Supported 00:09:41.428 Deallocated Read Value: All 0x00 00:09:41.428 Deallocate in Write Zeroes: Not Supported 00:09:41.428 Deallocated Guard Field: 0xFFFF 00:09:41.428 Flush: Supported 00:09:41.428 Reservation: Not Supported 00:09:41.428 Namespace Sharing Capabilities: Private 00:09:41.428 Size (in LBAs): 1048576 (4GiB) 00:09:41.428 Capacity (in LBAs): 1048576 (4GiB) 00:09:41.428 Utilization (in LBAs): 1048576 (4GiB) 00:09:41.428 Thin Provisioning: Not Supported 00:09:41.428 Per-NS Atomic Units: No 00:09:41.428 Maximum Single Source Range Length: 128 00:09:41.428 Maximum Copy Length: 128 00:09:41.428 Maximum Source Range Count: 128 00:09:41.428 NGUID/EUI64 Never Reused: No 00:09:41.428 Namespace Write Protected: No 00:09:41.428 Number of LBA Formats: 8 00:09:41.428 Current LBA Format: LBA Format #04 00:09:41.428 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.428 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.428 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.428 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.428 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.428 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.428 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.428 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.428 00:09:41.428 NVM Specific Namespace Data 00:09:41.428 =========================== 00:09:41.428 Logical Block Storage Tag Mask: 0 00:09:41.428 Protection Information Capabilities: 00:09:41.428 16b Guard Protection Information Storage Tag Support: No 00:09:41.428 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:41.428 Storage Tag Check Read Support: No 00:09:41.428 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.428 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.428 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.428 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.428 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.428 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.428 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.428 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.428 Namespace ID:2 00:09:41.428 Error Recovery Timeout: Unlimited 00:09:41.428 Command Set Identifier: NVM (00h) 00:09:41.428 Deallocate: Supported 00:09:41.428 Deallocated/Unwritten Error: Supported 00:09:41.428 Deallocated Read Value: All 0x00 00:09:41.428 Deallocate in Write Zeroes: Not Supported 00:09:41.428 Deallocated Guard Field: 0xFFFF 00:09:41.428 Flush: Supported 00:09:41.428 Reservation: Not Supported 00:09:41.428 Namespace Sharing Capabilities: Private 00:09:41.428 Size (in LBAs): 1048576 (4GiB) 00:09:41.428 Capacity (in LBAs): 1048576 (4GiB) 00:09:41.428 Utilization (in LBAs): 1048576 (4GiB) 00:09:41.428 Thin Provisioning: Not Supported 00:09:41.428 Per-NS Atomic Units: No 00:09:41.428 Maximum Single Source Range Length: 128 00:09:41.428 Maximum Copy Length: 128 00:09:41.428 Maximum Source Range Count: 128 00:09:41.428 NGUID/EUI64 Never Reused: No 00:09:41.428 Namespace Write Protected: No 00:09:41.428 Number of LBA Formats: 8 00:09:41.428 Current LBA Format: LBA Format #04 00:09:41.428 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.428 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.428 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.429 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.429 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.429 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.429 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.429 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.429 00:09:41.429 NVM Specific Namespace Data 00:09:41.429 =========================== 00:09:41.429 Logical Block Storage Tag Mask: 0 00:09:41.429 Protection Information Capabilities: 00:09:41.429 16b Guard Protection Information Storage Tag Support: No 00:09:41.429 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:41.429 Storage Tag Check Read Support: No 00:09:41.429 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Namespace ID:3 00:09:41.429 Error Recovery Timeout: Unlimited 00:09:41.429 Command Set Identifier: NVM (00h) 00:09:41.429 Deallocate: Supported 00:09:41.429 Deallocated/Unwritten Error: Supported 00:09:41.429 Deallocated Read 
Value: All 0x00 00:09:41.429 Deallocate in Write Zeroes: Not Supported 00:09:41.429 Deallocated Guard Field: 0xFFFF 00:09:41.429 Flush: Supported 00:09:41.429 Reservation: Not Supported 00:09:41.429 Namespace Sharing Capabilities: Private 00:09:41.429 Size (in LBAs): 1048576 (4GiB) 00:09:41.429 Capacity (in LBAs): 1048576 (4GiB) 00:09:41.429 Utilization (in LBAs): 1048576 (4GiB) 00:09:41.429 Thin Provisioning: Not Supported 00:09:41.429 Per-NS Atomic Units: No 00:09:41.429 Maximum Single Source Range Length: 128 00:09:41.429 Maximum Copy Length: 128 00:09:41.429 Maximum Source Range Count: 128 00:09:41.429 NGUID/EUI64 Never Reused: No 00:09:41.429 Namespace Write Protected: No 00:09:41.429 Number of LBA Formats: 8 00:09:41.429 Current LBA Format: LBA Format #04 00:09:41.429 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.429 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.429 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.429 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.429 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.429 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.429 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.429 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.429 00:09:41.429 NVM Specific Namespace Data 00:09:41.429 =========================== 00:09:41.429 Logical Block Storage Tag Mask: 0 00:09:41.429 Protection Information Capabilities: 00:09:41.429 16b Guard Protection Information Storage Tag Support: No 00:09:41.429 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:41.429 Storage Tag Check Read Support: No 00:09:41.429 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.429 08:31:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:41.429 08:31:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:41.689 ===================================================== 00:09:41.689 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:41.689 ===================================================== 00:09:41.689 Controller Capabilities/Features 00:09:41.689 ================================ 00:09:41.689 Vendor ID: 1b36 00:09:41.689 Subsystem Vendor ID: 1af4 00:09:41.689 Serial Number: 12343 00:09:41.689 Model Number: QEMU NVMe Ctrl 00:09:41.689 Firmware Version: 8.0.0 00:09:41.689 Recommended Arb Burst: 6 00:09:41.689 IEEE OUI Identifier: 00 54 52 00:09:41.689 Multi-path I/O 00:09:41.689 May have multiple subsystem ports: No 00:09:41.689 May have multiple controllers: Yes 00:09:41.689 Associated with SR-IOV VF: No 00:09:41.689 Max Data Transfer Size: 524288 00:09:41.689 Max Number of Namespaces: 
256 00:09:41.689 Max Number of I/O Queues: 64 00:09:41.689 NVMe Specification Version (VS): 1.4 00:09:41.689 NVMe Specification Version (Identify): 1.4 00:09:41.689 Maximum Queue Entries: 2048 00:09:41.689 Contiguous Queues Required: Yes 00:09:41.689 Arbitration Mechanisms Supported 00:09:41.689 Weighted Round Robin: Not Supported 00:09:41.689 Vendor Specific: Not Supported 00:09:41.689 Reset Timeout: 7500 ms 00:09:41.689 Doorbell Stride: 4 bytes 00:09:41.689 NVM Subsystem Reset: Not Supported 00:09:41.689 Command Sets Supported 00:09:41.689 NVM Command Set: Supported 00:09:41.689 Boot Partition: Not Supported 00:09:41.689 Memory Page Size Minimum: 4096 bytes 00:09:41.689 Memory Page Size Maximum: 65536 bytes 00:09:41.689 Persistent Memory Region: Not Supported 00:09:41.689 Optional Asynchronous Events Supported 00:09:41.689 Namespace Attribute Notices: Supported 00:09:41.689 Firmware Activation Notices: Not Supported 00:09:41.689 ANA Change Notices: Not Supported 00:09:41.689 PLE Aggregate Log Change Notices: Not Supported 00:09:41.689 LBA Status Info Alert Notices: Not Supported 00:09:41.689 EGE Aggregate Log Change Notices: Not Supported 00:09:41.689 Normal NVM Subsystem Shutdown event: Not Supported 00:09:41.689 Zone Descriptor Change Notices: Not Supported 00:09:41.689 Discovery Log Change Notices: Not Supported 00:09:41.689 Controller Attributes 00:09:41.689 128-bit Host Identifier: Not Supported 00:09:41.689 Non-Operational Permissive Mode: Not Supported 00:09:41.689 NVM Sets: Not Supported 00:09:41.689 Read Recovery Levels: Not Supported 00:09:41.689 Endurance Groups: Supported 00:09:41.689 Predictable Latency Mode: Not Supported 00:09:41.689 Traffic Based Keep ALive: Not Supported 00:09:41.689 Namespace Granularity: Not Supported 00:09:41.689 SQ Associations: Not Supported 00:09:41.689 UUID List: Not Supported 00:09:41.689 Multi-Domain Subsystem: Not Supported 00:09:41.689 Fixed Capacity Management: Not Supported 00:09:41.689 Variable Capacity Management: Not Supported 00:09:41.689 Delete Endurance Group: Not Supported 00:09:41.689 Delete NVM Set: Not Supported 00:09:41.689 Extended LBA Formats Supported: Supported 00:09:41.689 Flexible Data Placement Supported: Supported 00:09:41.689 00:09:41.689 Controller Memory Buffer Support 00:09:41.689 ================================ 00:09:41.689 Supported: No 00:09:41.689 00:09:41.689 Persistent Memory Region Support 00:09:41.689 ================================ 00:09:41.689 Supported: No 00:09:41.689 00:09:41.689 Admin Command Set Attributes 00:09:41.689 ============================ 00:09:41.689 Security Send/Receive: Not Supported 00:09:41.689 Format NVM: Supported 00:09:41.689 Firmware Activate/Download: Not Supported 00:09:41.689 Namespace Management: Supported 00:09:41.689 Device Self-Test: Not Supported 00:09:41.689 Directives: Supported 00:09:41.689 NVMe-MI: Not Supported 00:09:41.689 Virtualization Management: Not Supported 00:09:41.689 Doorbell Buffer Config: Supported 00:09:41.689 Get LBA Status Capability: Not Supported 00:09:41.689 Command & Feature Lockdown Capability: Not Supported 00:09:41.689 Abort Command Limit: 4 00:09:41.689 Async Event Request Limit: 4 00:09:41.689 Number of Firmware Slots: N/A 00:09:41.689 Firmware Slot 1 Read-Only: N/A 00:09:41.689 Firmware Activation Without Reset: N/A 00:09:41.689 Multiple Update Detection Support: N/A 00:09:41.689 Firmware Update Granularity: No Information Provided 00:09:41.689 Per-Namespace SMART Log: Yes 00:09:41.689 Asymmetric Namespace Access Log Page: Not Supported 
00:09:41.689 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:41.689 Command Effects Log Page: Supported 00:09:41.689 Get Log Page Extended Data: Supported 00:09:41.689 Telemetry Log Pages: Not Supported 00:09:41.689 Persistent Event Log Pages: Not Supported 00:09:41.689 Supported Log Pages Log Page: May Support 00:09:41.689 Commands Supported & Effects Log Page: Not Supported 00:09:41.689 Feature Identifiers & Effects Log Page:May Support 00:09:41.689 NVMe-MI Commands & Effects Log Page: May Support 00:09:41.689 Data Area 4 for Telemetry Log: Not Supported 00:09:41.689 Error Log Page Entries Supported: 1 00:09:41.689 Keep Alive: Not Supported 00:09:41.689 00:09:41.689 NVM Command Set Attributes 00:09:41.689 ========================== 00:09:41.689 Submission Queue Entry Size 00:09:41.689 Max: 64 00:09:41.689 Min: 64 00:09:41.689 Completion Queue Entry Size 00:09:41.689 Max: 16 00:09:41.689 Min: 16 00:09:41.689 Number of Namespaces: 256 00:09:41.689 Compare Command: Supported 00:09:41.689 Write Uncorrectable Command: Not Supported 00:09:41.689 Dataset Management Command: Supported 00:09:41.689 Write Zeroes Command: Supported 00:09:41.689 Set Features Save Field: Supported 00:09:41.689 Reservations: Not Supported 00:09:41.689 Timestamp: Supported 00:09:41.689 Copy: Supported 00:09:41.689 Volatile Write Cache: Present 00:09:41.689 Atomic Write Unit (Normal): 1 00:09:41.689 Atomic Write Unit (PFail): 1 00:09:41.689 Atomic Compare & Write Unit: 1 00:09:41.689 Fused Compare & Write: Not Supported 00:09:41.689 Scatter-Gather List 00:09:41.689 SGL Command Set: Supported 00:09:41.689 SGL Keyed: Not Supported 00:09:41.689 SGL Bit Bucket Descriptor: Not Supported 00:09:41.689 SGL Metadata Pointer: Not Supported 00:09:41.689 Oversized SGL: Not Supported 00:09:41.689 SGL Metadata Address: Not Supported 00:09:41.689 SGL Offset: Not Supported 00:09:41.689 Transport SGL Data Block: Not Supported 00:09:41.689 Replay Protected Memory Block: Not Supported 00:09:41.689 00:09:41.689 Firmware Slot Information 00:09:41.689 ========================= 00:09:41.689 Active slot: 1 00:09:41.689 Slot 1 Firmware Revision: 1.0 00:09:41.689 00:09:41.689 00:09:41.689 Commands Supported and Effects 00:09:41.689 ============================== 00:09:41.689 Admin Commands 00:09:41.689 -------------- 00:09:41.689 Delete I/O Submission Queue (00h): Supported 00:09:41.689 Create I/O Submission Queue (01h): Supported 00:09:41.689 Get Log Page (02h): Supported 00:09:41.689 Delete I/O Completion Queue (04h): Supported 00:09:41.689 Create I/O Completion Queue (05h): Supported 00:09:41.689 Identify (06h): Supported 00:09:41.689 Abort (08h): Supported 00:09:41.689 Set Features (09h): Supported 00:09:41.689 Get Features (0Ah): Supported 00:09:41.690 Asynchronous Event Request (0Ch): Supported 00:09:41.690 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:41.690 Directive Send (19h): Supported 00:09:41.690 Directive Receive (1Ah): Supported 00:09:41.690 Virtualization Management (1Ch): Supported 00:09:41.690 Doorbell Buffer Config (7Ch): Supported 00:09:41.690 Format NVM (80h): Supported LBA-Change 00:09:41.690 I/O Commands 00:09:41.690 ------------ 00:09:41.690 Flush (00h): Supported LBA-Change 00:09:41.690 Write (01h): Supported LBA-Change 00:09:41.690 Read (02h): Supported 00:09:41.690 Compare (05h): Supported 00:09:41.690 Write Zeroes (08h): Supported LBA-Change 00:09:41.690 Dataset Management (09h): Supported LBA-Change 00:09:41.690 Unknown (0Ch): Supported 00:09:41.690 Unknown (12h): Supported 00:09:41.690 Copy 
(19h): Supported LBA-Change 00:09:41.690 Unknown (1Dh): Supported LBA-Change 00:09:41.690 00:09:41.690 Error Log 00:09:41.690 ========= 00:09:41.690 00:09:41.690 Arbitration 00:09:41.690 =========== 00:09:41.690 Arbitration Burst: no limit 00:09:41.690 00:09:41.690 Power Management 00:09:41.690 ================ 00:09:41.690 Number of Power States: 1 00:09:41.690 Current Power State: Power State #0 00:09:41.690 Power State #0: 00:09:41.690 Max Power: 25.00 W 00:09:41.690 Non-Operational State: Operational 00:09:41.690 Entry Latency: 16 microseconds 00:09:41.690 Exit Latency: 4 microseconds 00:09:41.690 Relative Read Throughput: 0 00:09:41.690 Relative Read Latency: 0 00:09:41.690 Relative Write Throughput: 0 00:09:41.690 Relative Write Latency: 0 00:09:41.690 Idle Power: Not Reported 00:09:41.690 Active Power: Not Reported 00:09:41.690 Non-Operational Permissive Mode: Not Supported 00:09:41.690 00:09:41.690 Health Information 00:09:41.690 ================== 00:09:41.690 Critical Warnings: 00:09:41.690 Available Spare Space: OK 00:09:41.690 Temperature: OK 00:09:41.690 Device Reliability: OK 00:09:41.690 Read Only: No 00:09:41.690 Volatile Memory Backup: OK 00:09:41.690 Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.690 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:41.690 Available Spare: 0% 00:09:41.690 Available Spare Threshold: 0% 00:09:41.690 Life Percentage Used: 0% 00:09:41.690 Data Units Read: 932 00:09:41.690 Data Units Written: 861 00:09:41.690 Host Read Commands: 35800 00:09:41.690 Host Write Commands: 35223 00:09:41.690 Controller Busy Time: 0 minutes 00:09:41.690 Power Cycles: 0 00:09:41.690 Power On Hours: 0 hours 00:09:41.690 Unsafe Shutdowns: 0 00:09:41.690 Unrecoverable Media Errors: 0 00:09:41.690 Lifetime Error Log Entries: 0 00:09:41.690 Warning Temperature Time: 0 minutes 00:09:41.690 Critical Temperature Time: 0 minutes 00:09:41.690 00:09:41.690 Number of Queues 00:09:41.690 ================ 00:09:41.690 Number of I/O Submission Queues: 64 00:09:41.690 Number of I/O Completion Queues: 64 00:09:41.690 00:09:41.690 ZNS Specific Controller Data 00:09:41.690 ============================ 00:09:41.690 Zone Append Size Limit: 0 00:09:41.690 00:09:41.690 00:09:41.690 Active Namespaces 00:09:41.690 ================= 00:09:41.690 Namespace ID:1 00:09:41.690 Error Recovery Timeout: Unlimited 00:09:41.690 Command Set Identifier: NVM (00h) 00:09:41.690 Deallocate: Supported 00:09:41.690 Deallocated/Unwritten Error: Supported 00:09:41.690 Deallocated Read Value: All 0x00 00:09:41.690 Deallocate in Write Zeroes: Not Supported 00:09:41.690 Deallocated Guard Field: 0xFFFF 00:09:41.690 Flush: Supported 00:09:41.690 Reservation: Not Supported 00:09:41.690 Namespace Sharing Capabilities: Multiple Controllers 00:09:41.690 Size (in LBAs): 262144 (1GiB) 00:09:41.690 Capacity (in LBAs): 262144 (1GiB) 00:09:41.690 Utilization (in LBAs): 262144 (1GiB) 00:09:41.690 Thin Provisioning: Not Supported 00:09:41.690 Per-NS Atomic Units: No 00:09:41.690 Maximum Single Source Range Length: 128 00:09:41.690 Maximum Copy Length: 128 00:09:41.690 Maximum Source Range Count: 128 00:09:41.690 NGUID/EUI64 Never Reused: No 00:09:41.690 Namespace Write Protected: No 00:09:41.690 Endurance group ID: 1 00:09:41.690 Number of LBA Formats: 8 00:09:41.690 Current LBA Format: LBA Format #04 00:09:41.690 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.690 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.690 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.690 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:09:41.690 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.690 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.690 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.690 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.690 00:09:41.690 Get Feature FDP: 00:09:41.690 ================ 00:09:41.690 Enabled: Yes 00:09:41.690 FDP configuration index: 0 00:09:41.690 00:09:41.690 FDP configurations log page 00:09:41.690 =========================== 00:09:41.690 Number of FDP configurations: 1 00:09:41.690 Version: 0 00:09:41.690 Size: 112 00:09:41.690 FDP Configuration Descriptor: 0 00:09:41.690 Descriptor Size: 96 00:09:41.690 Reclaim Group Identifier format: 2 00:09:41.690 FDP Volatile Write Cache: Not Present 00:09:41.690 FDP Configuration: Valid 00:09:41.690 Vendor Specific Size: 0 00:09:41.690 Number of Reclaim Groups: 2 00:09:41.690 Number of Reclaim Unit Handles: 8 00:09:41.690 Max Placement Identifiers: 128 00:09:41.690 Number of Namespaces Supported: 256 00:09:41.690 Reclaim unit Nominal Size: 6000000 bytes 00:09:41.690 Estimated Reclaim Unit Time Limit: Not Reported 00:09:41.690 RUH Desc #000: RUH Type: Initially Isolated 00:09:41.690 RUH Desc #001: RUH Type: Initially Isolated 00:09:41.690 RUH Desc #002: RUH Type: Initially Isolated 00:09:41.690 RUH Desc #003: RUH Type: Initially Isolated 00:09:41.690 RUH Desc #004: RUH Type: Initially Isolated 00:09:41.690 RUH Desc #005: RUH Type: Initially Isolated 00:09:41.690 RUH Desc #006: RUH Type: Initially Isolated 00:09:41.690 RUH Desc #007: RUH Type: Initially Isolated 00:09:41.690 00:09:41.690 FDP reclaim unit handle usage log page 00:09:41.690 ====================================== 00:09:41.690 Number of Reclaim Unit Handles: 8 00:09:41.690 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:41.690 RUH Usage Desc #001: RUH Attributes: Unused 00:09:41.690 RUH Usage Desc #002: RUH Attributes: Unused 00:09:41.690 RUH Usage Desc #003: RUH Attributes: Unused 00:09:41.690 RUH Usage Desc #004: RUH Attributes: Unused 00:09:41.690 RUH Usage Desc #005: RUH Attributes: Unused 00:09:41.690 RUH Usage Desc #006: RUH Attributes: Unused 00:09:41.690 RUH Usage Desc #007: RUH Attributes: Unused 00:09:41.690 00:09:41.690 FDP statistics log page 00:09:41.690 ======================= 00:09:41.690 Host bytes with metadata written: 533241856 00:09:41.690 Media bytes with metadata written: 533299200 00:09:41.690 Media bytes erased: 0 00:09:41.690 00:09:41.690 FDP events log page 00:09:41.690 =================== 00:09:41.690 Number of FDP events: 0 00:09:41.690 00:09:41.690 NVM Specific Namespace Data 00:09:41.690 =========================== 00:09:41.690 Logical Block Storage Tag Mask: 0 00:09:41.690 Protection Information Capabilities: 00:09:41.690 16b Guard Protection Information Storage Tag Support: No 00:09:41.690 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:41.690 Storage Tag Check Read Support: No 00:09:41.690 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.690 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.690 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.690 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.690 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.690 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.690 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.690 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:41.690 00:09:41.690 real 0m1.342s 00:09:41.690 user 0m0.496s 00:09:41.690 sys 0m0.657s 00:09:41.690 08:31:03 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.690 08:31:03 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:41.690 ************************************ 00:09:41.690 END TEST nvme_identify 00:09:41.690 ************************************ 00:09:41.690 08:31:03 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:41.690 08:31:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.690 08:31:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.690 08:31:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.690 ************************************ 00:09:41.690 START TEST nvme_perf 00:09:41.690 ************************************ 00:09:41.691 08:31:03 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:09:41.691 08:31:03 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:43.070 Initializing NVMe Controllers 00:09:43.070 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:43.070 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:43.070 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:43.070 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:43.070 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:43.070 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:43.070 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:43.070 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:43.070 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:43.070 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:43.070 Initialization complete. Launching workers. 
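
The summary table that follows reports, per attached namespace, IOPS, MiB/s, and average/min/max latency in microseconds for the read workload requested above (-q 128 -w read -o 12288 -t 1). Two quick consistency checks can be read straight off any row: throughput should reconcile as MiB/s = IOPS x 12288 / 2^20, since every I/O in this run is 12288 bytes, and by Little's law the average latency should sit near queue depth / IOPS. A minimal shell sketch of both checks, using figures copied from the PCIE (0000:00:10.0) NSID 1 row below (the variable names are illustrative; this is not part of the SPDK test scripts):

# Consistency checks for one row of the nvme_perf summary table.
iops=15471.74        # IOPS column for PCIE (0000:00:10.0) NSID 1
io_size=12288        # bytes per I/O, from the -o 12288 option
qdepth=128           # outstanding I/Os, from the -q 128 option
awk -v iops="$iops" -v sz="$io_size" -v qd="$qdepth" 'BEGIN {
    printf "throughput: %.2f MiB/s\n", iops * sz / (1024 * 1024)   # ~181.31, matches the MiB/s column
    printf "avg latency: %.0f us\n",   qd / iops * 1e6             # ~8273, close to the reported 8278.51 us
}'

The same arithmetic applies to every row, including the Total line (92894.37 IOPS -> 1088.61 MiB/s); small deviations in the latency estimate are expected from ramp-up and completion tails in the 1-second run.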
00:09:43.070 ======================================================== 00:09:43.070 Latency(us) 00:09:43.070 Device Information : IOPS MiB/s Average min max 00:09:43.070 PCIE (0000:00:10.0) NSID 1 from core 0: 15471.74 181.31 8278.51 6926.39 38105.55 00:09:43.070 PCIE (0000:00:11.0) NSID 1 from core 0: 15471.74 181.31 8273.82 7020.94 37015.43 00:09:43.070 PCIE (0000:00:13.0) NSID 1 from core 0: 15471.74 181.31 8267.85 6454.75 37004.89 00:09:43.070 PCIE (0000:00:12.0) NSID 1 from core 0: 15471.74 181.31 8261.79 6067.44 36539.26 00:09:43.070 PCIE (0000:00:12.0) NSID 2 from core 0: 15471.74 181.31 8255.49 5665.22 35977.86 00:09:43.070 PCIE (0000:00:12.0) NSID 3 from core 0: 15535.67 182.06 8215.13 5301.68 31033.66 00:09:43.070 ======================================================== 00:09:43.070 Total : 92894.37 1088.61 8258.74 5301.68 38105.55 00:09:43.070 00:09:43.070 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:43.070 ================================================================================= 00:09:43.070 1.00000% : 7154.585us 00:09:43.070 10.00000% : 7440.769us 00:09:43.070 25.00000% : 7726.952us 00:09:43.070 50.00000% : 8013.135us 00:09:43.070 75.00000% : 8356.555us 00:09:43.070 90.00000% : 8642.739us 00:09:43.070 95.00000% : 9043.396us 00:09:43.070 98.00000% : 11046.679us 00:09:43.070 99.00000% : 11733.520us 00:09:43.070 99.50000% : 31594.648us 00:09:43.070 99.90000% : 37547.263us 00:09:43.070 99.99000% : 38234.103us 00:09:43.070 99.99900% : 38234.103us 00:09:43.070 99.99990% : 38234.103us 00:09:43.070 99.99999% : 38234.103us 00:09:43.070 00:09:43.070 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:43.070 ================================================================================= 00:09:43.070 1.00000% : 7240.440us 00:09:43.070 10.00000% : 7498.005us 00:09:43.070 25.00000% : 7726.952us 00:09:43.070 50.00000% : 8013.135us 00:09:43.070 75.00000% : 8299.319us 00:09:43.070 90.00000% : 8585.502us 00:09:43.070 95.00000% : 9043.396us 00:09:43.070 98.00000% : 11046.679us 00:09:43.070 99.00000% : 11847.993us 00:09:43.070 99.50000% : 31365.701us 00:09:43.070 99.90000% : 36860.423us 00:09:43.070 99.99000% : 37089.369us 00:09:43.070 99.99900% : 37089.369us 00:09:43.070 99.99990% : 37089.369us 00:09:43.070 99.99999% : 37089.369us 00:09:43.070 00:09:43.070 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:43.070 ================================================================================= 00:09:43.070 1.00000% : 7183.203us 00:09:43.070 10.00000% : 7498.005us 00:09:43.070 25.00000% : 7726.952us 00:09:43.070 50.00000% : 8013.135us 00:09:43.070 75.00000% : 8299.319us 00:09:43.070 90.00000% : 8585.502us 00:09:43.070 95.00000% : 8986.159us 00:09:43.070 98.00000% : 10874.969us 00:09:43.070 99.00000% : 11447.336us 00:09:43.070 99.50000% : 31594.648us 00:09:43.070 99.90000% : 36860.423us 00:09:43.071 99.99000% : 37089.369us 00:09:43.071 99.99900% : 37089.369us 00:09:43.071 99.99990% : 37089.369us 00:09:43.071 99.99999% : 37089.369us 00:09:43.071 00:09:43.071 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:43.071 ================================================================================= 00:09:43.071 1.00000% : 7183.203us 00:09:43.071 10.00000% : 7498.005us 00:09:43.071 25.00000% : 7726.952us 00:09:43.071 50.00000% : 8013.135us 00:09:43.071 75.00000% : 8299.319us 00:09:43.071 90.00000% : 8585.502us 00:09:43.071 95.00000% : 8986.159us 00:09:43.071 98.00000% : 10932.206us 00:09:43.071 99.00000% : 
11504.573us 00:09:43.071 99.50000% : 31136.755us 00:09:43.071 99.90000% : 36402.529us 00:09:43.071 99.99000% : 36631.476us 00:09:43.071 99.99900% : 36631.476us 00:09:43.071 99.99990% : 36631.476us 00:09:43.071 99.99999% : 36631.476us 00:09:43.071 00:09:43.071 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:43.071 ================================================================================= 00:09:43.071 1.00000% : 7183.203us 00:09:43.071 10.00000% : 7498.005us 00:09:43.071 25.00000% : 7726.952us 00:09:43.071 50.00000% : 8013.135us 00:09:43.071 75.00000% : 8299.319us 00:09:43.071 90.00000% : 8585.502us 00:09:43.071 95.00000% : 8928.922us 00:09:43.071 98.00000% : 10989.443us 00:09:43.071 99.00000% : 11619.046us 00:09:43.071 99.50000% : 30907.808us 00:09:43.071 99.90000% : 35944.636us 00:09:43.071 99.99000% : 36173.583us 00:09:43.071 99.99900% : 36173.583us 00:09:43.071 99.99990% : 36173.583us 00:09:43.071 99.99999% : 36173.583us 00:09:43.071 00:09:43.071 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:43.071 ================================================================================= 00:09:43.071 1.00000% : 7183.203us 00:09:43.071 10.00000% : 7498.005us 00:09:43.071 25.00000% : 7726.952us 00:09:43.071 50.00000% : 8013.135us 00:09:43.071 75.00000% : 8299.319us 00:09:43.071 90.00000% : 8585.502us 00:09:43.071 95.00000% : 8986.159us 00:09:43.071 98.00000% : 11046.679us 00:09:43.071 99.00000% : 11619.046us 00:09:43.071 99.50000% : 24955.193us 00:09:43.071 99.90000% : 30907.808us 00:09:43.071 99.99000% : 31136.755us 00:09:43.071 99.99900% : 31136.755us 00:09:43.071 99.99990% : 31136.755us 00:09:43.071 99.99999% : 31136.755us 00:09:43.071 00:09:43.071 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:43.071 ============================================================================== 00:09:43.071 Range in us Cumulative IO count 00:09:43.071 6925.638 - 6954.257: 0.0194% ( 3) 00:09:43.071 6954.257 - 6982.875: 0.0839% ( 10) 00:09:43.071 6982.875 - 7011.493: 0.1420% ( 9) 00:09:43.071 7011.493 - 7040.112: 0.2905% ( 23) 00:09:43.071 7040.112 - 7068.730: 0.4197% ( 20) 00:09:43.071 7068.730 - 7097.348: 0.6134% ( 30) 00:09:43.071 7097.348 - 7125.967: 0.8781% ( 41) 00:09:43.071 7125.967 - 7154.585: 1.2203% ( 53) 00:09:43.071 7154.585 - 7183.203: 1.6723% ( 70) 00:09:43.071 7183.203 - 7211.822: 2.1501% ( 74) 00:09:43.071 7211.822 - 7240.440: 2.7957% ( 100) 00:09:43.071 7240.440 - 7269.059: 3.5382% ( 115) 00:09:43.071 7269.059 - 7297.677: 4.1774% ( 99) 00:09:43.071 7297.677 - 7326.295: 5.2105% ( 160) 00:09:43.071 7326.295 - 7383.532: 7.4509% ( 347) 00:09:43.071 7383.532 - 7440.769: 10.1498% ( 418) 00:09:43.071 7440.769 - 7498.005: 13.0101% ( 443) 00:09:43.071 7498.005 - 7555.242: 16.3352% ( 515) 00:09:43.071 7555.242 - 7612.479: 20.1575% ( 592) 00:09:43.071 7612.479 - 7669.715: 24.3995% ( 657) 00:09:43.071 7669.715 - 7726.952: 28.7513% ( 674) 00:09:43.071 7726.952 - 7784.189: 33.2193% ( 692) 00:09:43.071 7784.189 - 7841.425: 37.8228% ( 713) 00:09:43.071 7841.425 - 7898.662: 42.4587% ( 718) 00:09:43.071 7898.662 - 7955.899: 47.2366% ( 740) 00:09:43.071 7955.899 - 8013.135: 51.9886% ( 736) 00:09:43.071 8013.135 - 8070.372: 56.5535% ( 707) 00:09:43.071 8070.372 - 8127.609: 61.2668% ( 730) 00:09:43.071 8127.609 - 8184.845: 65.7864% ( 700) 00:09:43.071 8184.845 - 8242.082: 70.0155% ( 655) 00:09:43.071 8242.082 - 8299.319: 74.1865% ( 646) 00:09:43.071 8299.319 - 8356.555: 77.9765% ( 587) 00:09:43.071 8356.555 - 8413.792: 81.3017% ( 515) 
00:09:43.071 8413.792 - 8471.029: 84.2459% ( 456) 00:09:43.071 8471.029 - 8528.266: 86.7769% ( 392) 00:09:43.071 8528.266 - 8585.502: 88.9398% ( 335) 00:09:43.071 8585.502 - 8642.739: 90.5927% ( 256) 00:09:43.071 8642.739 - 8699.976: 92.0390% ( 224) 00:09:43.071 8699.976 - 8757.212: 93.0204% ( 152) 00:09:43.071 8757.212 - 8814.449: 93.7177% ( 108) 00:09:43.071 8814.449 - 8871.686: 94.2278% ( 79) 00:09:43.071 8871.686 - 8928.922: 94.6475% ( 65) 00:09:43.071 8928.922 - 8986.159: 94.9122% ( 41) 00:09:43.071 8986.159 - 9043.396: 95.1511% ( 37) 00:09:43.071 9043.396 - 9100.632: 95.3448% ( 30) 00:09:43.071 9100.632 - 9157.869: 95.4610% ( 18) 00:09:43.071 9157.869 - 9215.106: 95.5837% ( 19) 00:09:43.071 9215.106 - 9272.342: 95.7386% ( 24) 00:09:43.071 9272.342 - 9329.579: 95.8807% ( 22) 00:09:43.071 9329.579 - 9386.816: 96.0292% ( 23) 00:09:43.071 9386.816 - 9444.052: 96.1260% ( 15) 00:09:43.071 9444.052 - 9501.289: 96.2164% ( 14) 00:09:43.071 9501.289 - 9558.526: 96.2745% ( 9) 00:09:43.071 9558.526 - 9615.762: 96.3326% ( 9) 00:09:43.071 9615.762 - 9672.999: 96.3714% ( 6) 00:09:43.071 9672.999 - 9730.236: 96.4166% ( 7) 00:09:43.071 9730.236 - 9787.472: 96.4747% ( 9) 00:09:43.071 9787.472 - 9844.709: 96.5457% ( 11) 00:09:43.071 9844.709 - 9901.946: 96.5780% ( 5) 00:09:43.071 9901.946 - 9959.183: 96.6103% ( 5) 00:09:43.071 9959.183 - 10016.419: 96.6684% ( 9) 00:09:43.071 10016.419 - 10073.656: 96.6942% ( 4) 00:09:43.071 10073.656 - 10130.893: 96.7200% ( 4) 00:09:43.071 10130.893 - 10188.129: 96.7846% ( 10) 00:09:43.071 10188.129 - 10245.366: 96.8621% ( 12) 00:09:43.071 10245.366 - 10302.603: 96.9654% ( 16) 00:09:43.071 10302.603 - 10359.839: 97.0558% ( 14) 00:09:43.071 10359.839 - 10417.076: 97.1204% ( 10) 00:09:43.071 10417.076 - 10474.313: 97.1914% ( 11) 00:09:43.071 10474.313 - 10531.549: 97.2559% ( 10) 00:09:43.071 10531.549 - 10588.786: 97.3205% ( 10) 00:09:43.071 10588.786 - 10646.023: 97.4044% ( 13) 00:09:43.071 10646.023 - 10703.259: 97.4755% ( 11) 00:09:43.071 10703.259 - 10760.496: 97.5723% ( 15) 00:09:43.071 10760.496 - 10817.733: 97.6498% ( 12) 00:09:43.071 10817.733 - 10874.969: 97.7466% ( 15) 00:09:43.071 10874.969 - 10932.206: 97.8370% ( 14) 00:09:43.071 10932.206 - 10989.443: 97.9339% ( 15) 00:09:43.071 10989.443 - 11046.679: 98.0436% ( 17) 00:09:43.071 11046.679 - 11103.916: 98.1534% ( 17) 00:09:43.071 11103.916 - 11161.153: 98.2632% ( 17) 00:09:43.071 11161.153 - 11218.390: 98.3536% ( 14) 00:09:43.071 11218.390 - 11275.626: 98.4569% ( 16) 00:09:43.071 11275.626 - 11332.863: 98.5408% ( 13) 00:09:43.071 11332.863 - 11390.100: 98.6312% ( 14) 00:09:43.071 11390.100 - 11447.336: 98.7410% ( 17) 00:09:43.071 11447.336 - 11504.573: 98.8184% ( 12) 00:09:43.071 11504.573 - 11561.810: 98.8959% ( 12) 00:09:43.071 11561.810 - 11619.046: 98.9540% ( 9) 00:09:43.071 11619.046 - 11676.283: 98.9992% ( 7) 00:09:43.071 11676.283 - 11733.520: 99.0509% ( 8) 00:09:43.071 11733.520 - 11790.756: 99.0638% ( 2) 00:09:43.071 11790.756 - 11847.993: 99.0832% ( 3) 00:09:43.071 11847.993 - 11905.230: 99.1090% ( 4) 00:09:43.071 11905.230 - 11962.466: 99.1219% ( 2) 00:09:43.071 11962.466 - 12019.703: 99.1413% ( 3) 00:09:43.071 12019.703 - 12076.940: 99.1606% ( 3) 00:09:43.071 12076.940 - 12134.176: 99.1736% ( 2) 00:09:43.071 30449.914 - 30678.861: 99.1865% ( 2) 00:09:43.071 30678.861 - 30907.808: 99.2639% ( 12) 00:09:43.071 30907.808 - 31136.755: 99.3543% ( 14) 00:09:43.071 31136.755 - 31365.701: 99.4318% ( 12) 00:09:43.071 31365.701 - 31594.648: 99.5028% ( 11) 00:09:43.071 31594.648 - 31823.595: 99.5803% ( 12) 
00:09:43.071 31823.595 - 32052.541: 99.5868% ( 1) 00:09:43.071 36402.529 - 36631.476: 99.6126% ( 4) 00:09:43.071 36631.476 - 36860.423: 99.6965% ( 13) 00:09:43.071 36860.423 - 37089.369: 99.7611% ( 10) 00:09:43.071 37089.369 - 37318.316: 99.8386% ( 12) 00:09:43.071 37318.316 - 37547.263: 99.9032% ( 10) 00:09:43.071 37776.210 - 38005.156: 99.9742% ( 11) 00:09:43.071 38005.156 - 38234.103: 100.0000% ( 4) 00:09:43.071 00:09:43.071 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:43.071 ============================================================================== 00:09:43.071 Range in us Cumulative IO count 00:09:43.071 7011.493 - 7040.112: 0.0258% ( 4) 00:09:43.071 7040.112 - 7068.730: 0.0839% ( 9) 00:09:43.071 7068.730 - 7097.348: 0.1614% ( 12) 00:09:43.071 7097.348 - 7125.967: 0.2841% ( 19) 00:09:43.071 7125.967 - 7154.585: 0.4132% ( 20) 00:09:43.071 7154.585 - 7183.203: 0.6263% ( 33) 00:09:43.071 7183.203 - 7211.822: 0.9104% ( 44) 00:09:43.071 7211.822 - 7240.440: 1.2978% ( 60) 00:09:43.071 7240.440 - 7269.059: 1.7756% ( 74) 00:09:43.072 7269.059 - 7297.677: 2.3308% ( 86) 00:09:43.072 7297.677 - 7326.295: 3.1637% ( 129) 00:09:43.072 7326.295 - 7383.532: 4.9264% ( 273) 00:09:43.072 7383.532 - 7440.769: 7.2508% ( 360) 00:09:43.072 7440.769 - 7498.005: 10.1046% ( 442) 00:09:43.072 7498.005 - 7555.242: 13.4104% ( 512) 00:09:43.072 7555.242 - 7612.479: 17.1165% ( 574) 00:09:43.072 7612.479 - 7669.715: 21.2358% ( 638) 00:09:43.072 7669.715 - 7726.952: 26.1105% ( 755) 00:09:43.072 7726.952 - 7784.189: 31.1790% ( 785) 00:09:43.072 7784.189 - 7841.425: 36.3959% ( 808) 00:09:43.072 7841.425 - 7898.662: 41.8970% ( 852) 00:09:43.072 7898.662 - 7955.899: 47.4561% ( 861) 00:09:43.072 7955.899 - 8013.135: 53.0669% ( 869) 00:09:43.072 8013.135 - 8070.372: 58.4775% ( 838) 00:09:43.072 8070.372 - 8127.609: 63.7461% ( 816) 00:09:43.072 8127.609 - 8184.845: 68.7823% ( 780) 00:09:43.072 8184.845 - 8242.082: 73.3988% ( 715) 00:09:43.072 8242.082 - 8299.319: 77.3825% ( 617) 00:09:43.072 8299.319 - 8356.555: 80.9465% ( 552) 00:09:43.072 8356.555 - 8413.792: 83.9037% ( 458) 00:09:43.072 8413.792 - 8471.029: 86.5121% ( 404) 00:09:43.072 8471.029 - 8528.266: 88.7268% ( 343) 00:09:43.072 8528.266 - 8585.502: 90.5798% ( 287) 00:09:43.072 8585.502 - 8642.739: 91.9163% ( 207) 00:09:43.072 8642.739 - 8699.976: 92.8784% ( 149) 00:09:43.072 8699.976 - 8757.212: 93.6209% ( 115) 00:09:43.072 8757.212 - 8814.449: 94.1503% ( 82) 00:09:43.072 8814.449 - 8871.686: 94.5119% ( 56) 00:09:43.072 8871.686 - 8928.922: 94.7766% ( 41) 00:09:43.072 8928.922 - 8986.159: 94.9703% ( 30) 00:09:43.072 8986.159 - 9043.396: 95.1898% ( 34) 00:09:43.072 9043.396 - 9100.632: 95.3706% ( 28) 00:09:43.072 9100.632 - 9157.869: 95.5256% ( 24) 00:09:43.072 9157.869 - 9215.106: 95.6934% ( 26) 00:09:43.072 9215.106 - 9272.342: 95.8678% ( 27) 00:09:43.072 9272.342 - 9329.579: 96.0227% ( 24) 00:09:43.072 9329.579 - 9386.816: 96.1454% ( 19) 00:09:43.072 9386.816 - 9444.052: 96.2423% ( 15) 00:09:43.072 9444.052 - 9501.289: 96.2939% ( 8) 00:09:43.072 9501.289 - 9558.526: 96.3520% ( 9) 00:09:43.072 9558.526 - 9615.762: 96.3972% ( 7) 00:09:43.072 9615.762 - 9672.999: 96.4424% ( 7) 00:09:43.072 9672.999 - 9730.236: 96.5005% ( 9) 00:09:43.072 9730.236 - 9787.472: 96.5328% ( 5) 00:09:43.072 9787.472 - 9844.709: 96.5845% ( 8) 00:09:43.072 9844.709 - 9901.946: 96.6296% ( 7) 00:09:43.072 9901.946 - 9959.183: 96.7007% ( 11) 00:09:43.072 9959.183 - 10016.419: 96.7330% ( 5) 00:09:43.072 10016.419 - 10073.656: 96.7523% ( 3) 00:09:43.072 10073.656 - 
10130.893: 96.7717% ( 3) 00:09:43.072 10130.893 - 10188.129: 96.7911% ( 3) 00:09:43.072 10188.129 - 10245.366: 96.8169% ( 4) 00:09:43.072 10245.366 - 10302.603: 96.8556% ( 6) 00:09:43.072 10302.603 - 10359.839: 96.9460% ( 14) 00:09:43.072 10359.839 - 10417.076: 97.0170% ( 11) 00:09:43.072 10417.076 - 10474.313: 97.1010% ( 13) 00:09:43.072 10474.313 - 10531.549: 97.1849% ( 13) 00:09:43.072 10531.549 - 10588.786: 97.2753% ( 14) 00:09:43.072 10588.786 - 10646.023: 97.3657% ( 14) 00:09:43.072 10646.023 - 10703.259: 97.4496% ( 13) 00:09:43.072 10703.259 - 10760.496: 97.5465% ( 15) 00:09:43.072 10760.496 - 10817.733: 97.6498% ( 16) 00:09:43.072 10817.733 - 10874.969: 97.7466% ( 15) 00:09:43.072 10874.969 - 10932.206: 97.8564% ( 17) 00:09:43.072 10932.206 - 10989.443: 97.9597% ( 16) 00:09:43.072 10989.443 - 11046.679: 98.0824% ( 19) 00:09:43.072 11046.679 - 11103.916: 98.1792% ( 15) 00:09:43.072 11103.916 - 11161.153: 98.2825% ( 16) 00:09:43.072 11161.153 - 11218.390: 98.3858% ( 16) 00:09:43.072 11218.390 - 11275.626: 98.4892% ( 16) 00:09:43.072 11275.626 - 11332.863: 98.5860% ( 15) 00:09:43.072 11332.863 - 11390.100: 98.6764% ( 14) 00:09:43.072 11390.100 - 11447.336: 98.7603% ( 13) 00:09:43.072 11447.336 - 11504.573: 98.8378% ( 12) 00:09:43.072 11504.573 - 11561.810: 98.8830% ( 7) 00:09:43.072 11561.810 - 11619.046: 98.9217% ( 6) 00:09:43.072 11619.046 - 11676.283: 98.9540% ( 5) 00:09:43.072 11676.283 - 11733.520: 98.9734% ( 3) 00:09:43.072 11733.520 - 11790.756: 98.9992% ( 4) 00:09:43.072 11790.756 - 11847.993: 99.0186% ( 3) 00:09:43.072 11847.993 - 11905.230: 99.0380% ( 3) 00:09:43.072 11905.230 - 11962.466: 99.0573% ( 3) 00:09:43.072 11962.466 - 12019.703: 99.0832% ( 4) 00:09:43.072 12019.703 - 12076.940: 99.1025% ( 3) 00:09:43.072 12076.940 - 12134.176: 99.1284% ( 4) 00:09:43.072 12134.176 - 12191.413: 99.1542% ( 4) 00:09:43.072 12191.413 - 12248.650: 99.1736% ( 3) 00:09:43.072 30220.968 - 30449.914: 99.1929% ( 3) 00:09:43.072 30449.914 - 30678.861: 99.2833% ( 14) 00:09:43.072 30678.861 - 30907.808: 99.3802% ( 15) 00:09:43.072 30907.808 - 31136.755: 99.4706% ( 14) 00:09:43.072 31136.755 - 31365.701: 99.5610% ( 14) 00:09:43.072 31365.701 - 31594.648: 99.5868% ( 4) 00:09:43.072 35944.636 - 36173.583: 99.6643% ( 12) 00:09:43.072 36173.583 - 36402.529: 99.7482% ( 13) 00:09:43.072 36402.529 - 36631.476: 99.8386% ( 14) 00:09:43.072 36631.476 - 36860.423: 99.9354% ( 15) 00:09:43.072 36860.423 - 37089.369: 100.0000% ( 10) 00:09:43.072 00:09:43.072 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:43.072 ============================================================================== 00:09:43.072 Range in us Cumulative IO count 00:09:43.072 6439.127 - 6467.745: 0.0129% ( 2) 00:09:43.072 6467.745 - 6496.363: 0.0194% ( 1) 00:09:43.072 6496.363 - 6524.982: 0.0323% ( 2) 00:09:43.072 6524.982 - 6553.600: 0.0452% ( 2) 00:09:43.072 6553.600 - 6582.218: 0.0581% ( 2) 00:09:43.072 6582.218 - 6610.837: 0.0710% ( 2) 00:09:43.072 6610.837 - 6639.455: 0.0839% ( 2) 00:09:43.072 6639.455 - 6668.073: 0.0968% ( 2) 00:09:43.072 6668.073 - 6696.692: 0.1098% ( 2) 00:09:43.072 6696.692 - 6725.310: 0.1162% ( 1) 00:09:43.072 6725.310 - 6753.928: 0.1291% ( 2) 00:09:43.072 6753.928 - 6782.547: 0.1420% ( 2) 00:09:43.072 6782.547 - 6811.165: 0.1550% ( 2) 00:09:43.072 6811.165 - 6839.783: 0.1679% ( 2) 00:09:43.072 6839.783 - 6868.402: 0.1808% ( 2) 00:09:43.072 6868.402 - 6897.020: 0.1937% ( 2) 00:09:43.072 6897.020 - 6925.638: 0.2002% ( 1) 00:09:43.072 6925.638 - 6954.257: 0.2131% ( 2) 00:09:43.072 6954.257 - 
6982.875: 0.2260% ( 2) 00:09:43.072 6982.875 - 7011.493: 0.2583% ( 5) 00:09:43.072 7011.493 - 7040.112: 0.3099% ( 8) 00:09:43.072 7040.112 - 7068.730: 0.4003% ( 14) 00:09:43.072 7068.730 - 7097.348: 0.5165% ( 18) 00:09:43.072 7097.348 - 7125.967: 0.6521% ( 21) 00:09:43.072 7125.967 - 7154.585: 0.8329% ( 28) 00:09:43.072 7154.585 - 7183.203: 1.0783% ( 38) 00:09:43.072 7183.203 - 7211.822: 1.3882% ( 48) 00:09:43.072 7211.822 - 7240.440: 1.7885% ( 62) 00:09:43.072 7240.440 - 7269.059: 2.3438% ( 86) 00:09:43.072 7269.059 - 7297.677: 2.8990% ( 86) 00:09:43.072 7297.677 - 7326.295: 3.5189% ( 96) 00:09:43.072 7326.295 - 7383.532: 5.2686% ( 271) 00:09:43.072 7383.532 - 7440.769: 7.6898% ( 375) 00:09:43.072 7440.769 - 7498.005: 10.4533% ( 428) 00:09:43.072 7498.005 - 7555.242: 13.7138% ( 505) 00:09:43.072 7555.242 - 7612.479: 17.5297% ( 591) 00:09:43.072 7612.479 - 7669.715: 21.6296% ( 635) 00:09:43.072 7669.715 - 7726.952: 26.1622% ( 702) 00:09:43.072 7726.952 - 7784.189: 31.1596% ( 774) 00:09:43.072 7784.189 - 7841.425: 36.2926% ( 795) 00:09:43.072 7841.425 - 7898.662: 41.6903% ( 836) 00:09:43.072 7898.662 - 7955.899: 47.1849% ( 851) 00:09:43.072 7955.899 - 8013.135: 52.6537% ( 847) 00:09:43.072 8013.135 - 8070.372: 58.0772% ( 840) 00:09:43.072 8070.372 - 8127.609: 63.4039% ( 825) 00:09:43.072 8127.609 - 8184.845: 68.2593% ( 752) 00:09:43.072 8184.845 - 8242.082: 72.8435% ( 710) 00:09:43.072 8242.082 - 8299.319: 76.9693% ( 639) 00:09:43.072 8299.319 - 8356.555: 80.6689% ( 573) 00:09:43.072 8356.555 - 8413.792: 83.6841% ( 467) 00:09:43.072 8413.792 - 8471.029: 86.3314% ( 410) 00:09:43.072 8471.029 - 8528.266: 88.4943% ( 335) 00:09:43.072 8528.266 - 8585.502: 90.2699% ( 275) 00:09:43.072 8585.502 - 8642.739: 91.7291% ( 226) 00:09:43.072 8642.739 - 8699.976: 92.7492% ( 158) 00:09:43.072 8699.976 - 8757.212: 93.5240% ( 120) 00:09:43.072 8757.212 - 8814.449: 94.1439% ( 96) 00:09:43.072 8814.449 - 8871.686: 94.5506% ( 63) 00:09:43.072 8871.686 - 8928.922: 94.8218% ( 42) 00:09:43.072 8928.922 - 8986.159: 95.0865% ( 41) 00:09:43.072 8986.159 - 9043.396: 95.3125% ( 35) 00:09:43.072 9043.396 - 9100.632: 95.4804% ( 26) 00:09:43.072 9100.632 - 9157.869: 95.6353% ( 24) 00:09:43.072 9157.869 - 9215.106: 95.8161% ( 28) 00:09:43.072 9215.106 - 9272.342: 95.9775% ( 25) 00:09:43.072 9272.342 - 9329.579: 96.1325% ( 24) 00:09:43.072 9329.579 - 9386.816: 96.2358% ( 16) 00:09:43.072 9386.816 - 9444.052: 96.3197% ( 13) 00:09:43.072 9444.052 - 9501.289: 96.3972% ( 12) 00:09:43.072 9501.289 - 9558.526: 96.4682% ( 11) 00:09:43.072 9558.526 - 9615.762: 96.5457% ( 12) 00:09:43.072 9615.762 - 9672.999: 96.6103% ( 10) 00:09:43.072 9672.999 - 9730.236: 96.6555% ( 7) 00:09:43.072 9730.236 - 9787.472: 96.6878% ( 5) 00:09:43.072 9787.472 - 9844.709: 96.7330% ( 7) 00:09:43.073 9844.709 - 9901.946: 96.7717% ( 6) 00:09:43.073 9901.946 - 9959.183: 96.8104% ( 6) 00:09:43.073 9959.183 - 10016.419: 96.8492% ( 6) 00:09:43.073 10016.419 - 10073.656: 96.8944% ( 7) 00:09:43.073 10073.656 - 10130.893: 96.9460% ( 8) 00:09:43.073 10130.893 - 10188.129: 96.9977% ( 8) 00:09:43.073 10188.129 - 10245.366: 97.0816% ( 13) 00:09:43.073 10245.366 - 10302.603: 97.1591% ( 12) 00:09:43.073 10302.603 - 10359.839: 97.2237% ( 10) 00:09:43.073 10359.839 - 10417.076: 97.2882% ( 10) 00:09:43.073 10417.076 - 10474.313: 97.3657% ( 12) 00:09:43.073 10474.313 - 10531.549: 97.4561% ( 14) 00:09:43.073 10531.549 - 10588.786: 97.5594% ( 16) 00:09:43.073 10588.786 - 10646.023: 97.6369% ( 12) 00:09:43.073 10646.023 - 10703.259: 97.7273% ( 14) 00:09:43.073 10703.259 - 
10760.496: 97.8564% ( 20) 00:09:43.073 10760.496 - 10817.733: 97.9726% ( 18) 00:09:43.073 10817.733 - 10874.969: 98.0759% ( 16) 00:09:43.073 10874.969 - 10932.206: 98.1792% ( 16) 00:09:43.073 10932.206 - 10989.443: 98.2890% ( 17) 00:09:43.073 10989.443 - 11046.679: 98.4052% ( 18) 00:09:43.073 11046.679 - 11103.916: 98.5085% ( 16) 00:09:43.073 11103.916 - 11161.153: 98.6183% ( 17) 00:09:43.073 11161.153 - 11218.390: 98.7280% ( 17) 00:09:43.073 11218.390 - 11275.626: 98.8378% ( 17) 00:09:43.073 11275.626 - 11332.863: 98.9282% ( 14) 00:09:43.073 11332.863 - 11390.100: 98.9992% ( 11) 00:09:43.073 11390.100 - 11447.336: 99.0444% ( 7) 00:09:43.073 11447.336 - 11504.573: 99.0896% ( 7) 00:09:43.073 11504.573 - 11561.810: 99.1348% ( 7) 00:09:43.073 11561.810 - 11619.046: 99.1671% ( 5) 00:09:43.073 11619.046 - 11676.283: 99.1736% ( 1) 00:09:43.073 30678.861 - 30907.808: 99.2575% ( 13) 00:09:43.073 30907.808 - 31136.755: 99.3479% ( 14) 00:09:43.073 31136.755 - 31365.701: 99.4318% ( 13) 00:09:43.073 31365.701 - 31594.648: 99.5222% ( 14) 00:09:43.073 31594.648 - 31823.595: 99.5868% ( 10) 00:09:43.073 35944.636 - 36173.583: 99.6707% ( 13) 00:09:43.073 36173.583 - 36402.529: 99.7546% ( 13) 00:09:43.073 36402.529 - 36631.476: 99.8450% ( 14) 00:09:43.073 36631.476 - 36860.423: 99.9354% ( 14) 00:09:43.073 36860.423 - 37089.369: 100.0000% ( 10) 00:09:43.073 00:09:43.073 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:43.073 ============================================================================== 00:09:43.073 Range in us Cumulative IO count 00:09:43.073 6067.088 - 6095.707: 0.0129% ( 2) 00:09:43.073 6095.707 - 6124.325: 0.0194% ( 1) 00:09:43.073 6124.325 - 6152.943: 0.0323% ( 2) 00:09:43.073 6152.943 - 6181.562: 0.0452% ( 2) 00:09:43.073 6181.562 - 6210.180: 0.0581% ( 2) 00:09:43.073 6210.180 - 6238.798: 0.0646% ( 1) 00:09:43.073 6238.798 - 6267.417: 0.0775% ( 2) 00:09:43.073 6267.417 - 6296.035: 0.0968% ( 3) 00:09:43.073 6296.035 - 6324.653: 0.1098% ( 2) 00:09:43.073 6324.653 - 6353.272: 0.1162% ( 1) 00:09:43.073 6353.272 - 6381.890: 0.1291% ( 2) 00:09:43.073 6381.890 - 6410.508: 0.1420% ( 2) 00:09:43.073 6410.508 - 6439.127: 0.1550% ( 2) 00:09:43.073 6439.127 - 6467.745: 0.1614% ( 1) 00:09:43.073 6467.745 - 6496.363: 0.1743% ( 2) 00:09:43.073 6496.363 - 6524.982: 0.1872% ( 2) 00:09:43.073 6524.982 - 6553.600: 0.2002% ( 2) 00:09:43.073 6553.600 - 6582.218: 0.2066% ( 1) 00:09:43.073 6582.218 - 6610.837: 0.2195% ( 2) 00:09:43.073 6610.837 - 6639.455: 0.2324% ( 2) 00:09:43.073 6639.455 - 6668.073: 0.2454% ( 2) 00:09:43.073 6668.073 - 6696.692: 0.2583% ( 2) 00:09:43.073 6696.692 - 6725.310: 0.2712% ( 2) 00:09:43.073 6725.310 - 6753.928: 0.2776% ( 1) 00:09:43.073 6753.928 - 6782.547: 0.2905% ( 2) 00:09:43.073 6782.547 - 6811.165: 0.3035% ( 2) 00:09:43.073 6811.165 - 6839.783: 0.3164% ( 2) 00:09:43.073 6839.783 - 6868.402: 0.3293% ( 2) 00:09:43.073 6868.402 - 6897.020: 0.3422% ( 2) 00:09:43.073 6897.020 - 6925.638: 0.3551% ( 2) 00:09:43.073 6925.638 - 6954.257: 0.3616% ( 1) 00:09:43.073 6954.257 - 6982.875: 0.4068% ( 7) 00:09:43.073 6982.875 - 7011.493: 0.4455% ( 6) 00:09:43.073 7011.493 - 7040.112: 0.5230% ( 12) 00:09:43.073 7040.112 - 7068.730: 0.6198% ( 15) 00:09:43.073 7068.730 - 7097.348: 0.6909% ( 11) 00:09:43.073 7097.348 - 7125.967: 0.8006% ( 17) 00:09:43.073 7125.967 - 7154.585: 0.9685% ( 26) 00:09:43.073 7154.585 - 7183.203: 1.1622% ( 30) 00:09:43.073 7183.203 - 7211.822: 1.4398% ( 43) 00:09:43.073 7211.822 - 7240.440: 1.8143% ( 58) 00:09:43.073 7240.440 - 7269.059: 2.2082% ( 
61) 00:09:43.073 7269.059 - 7297.677: 2.8022% ( 92) 00:09:43.073 7297.677 - 7326.295: 3.5059% ( 109) 00:09:43.073 7326.295 - 7383.532: 5.3525% ( 286) 00:09:43.073 7383.532 - 7440.769: 7.5026% ( 333) 00:09:43.073 7440.769 - 7498.005: 10.4274% ( 453) 00:09:43.073 7498.005 - 7555.242: 13.6041% ( 492) 00:09:43.073 7555.242 - 7612.479: 17.2521% ( 565) 00:09:43.073 7612.479 - 7669.715: 21.4489% ( 650) 00:09:43.073 7669.715 - 7726.952: 26.1816% ( 733) 00:09:43.073 7726.952 - 7784.189: 31.1402% ( 768) 00:09:43.073 7784.189 - 7841.425: 36.4282% ( 819) 00:09:43.073 7841.425 - 7898.662: 41.8905% ( 846) 00:09:43.073 7898.662 - 7955.899: 47.4367% ( 859) 00:09:43.073 7955.899 - 8013.135: 52.8796% ( 843) 00:09:43.073 8013.135 - 8070.372: 58.3678% ( 850) 00:09:43.073 8070.372 - 8127.609: 63.4879% ( 793) 00:09:43.073 8127.609 - 8184.845: 68.3626% ( 755) 00:09:43.073 8184.845 - 8242.082: 72.9403% ( 709) 00:09:43.073 8242.082 - 8299.319: 77.0338% ( 634) 00:09:43.073 8299.319 - 8356.555: 80.6754% ( 564) 00:09:43.073 8356.555 - 8413.792: 83.6519% ( 461) 00:09:43.073 8413.792 - 8471.029: 86.2862% ( 408) 00:09:43.073 8471.029 - 8528.266: 88.5331% ( 348) 00:09:43.073 8528.266 - 8585.502: 90.3151% ( 276) 00:09:43.073 8585.502 - 8642.739: 91.8453% ( 237) 00:09:43.073 8642.739 - 8699.976: 92.9236% ( 167) 00:09:43.073 8699.976 - 8757.212: 93.7435% ( 127) 00:09:43.073 8757.212 - 8814.449: 94.2730% ( 82) 00:09:43.073 8814.449 - 8871.686: 94.6733% ( 62) 00:09:43.073 8871.686 - 8928.922: 94.9509% ( 43) 00:09:43.073 8928.922 - 8986.159: 95.1640% ( 33) 00:09:43.073 8986.159 - 9043.396: 95.3706% ( 32) 00:09:43.073 9043.396 - 9100.632: 95.5643% ( 30) 00:09:43.073 9100.632 - 9157.869: 95.7451% ( 28) 00:09:43.073 9157.869 - 9215.106: 95.9388% ( 30) 00:09:43.073 9215.106 - 9272.342: 96.1067% ( 26) 00:09:43.073 9272.342 - 9329.579: 96.2293% ( 19) 00:09:43.073 9329.579 - 9386.816: 96.3391% ( 17) 00:09:43.073 9386.816 - 9444.052: 96.4101% ( 11) 00:09:43.073 9444.052 - 9501.289: 96.5005% ( 14) 00:09:43.073 9501.289 - 9558.526: 96.5651% ( 10) 00:09:43.073 9558.526 - 9615.762: 96.6103% ( 7) 00:09:43.073 9615.762 - 9672.999: 96.6426% ( 5) 00:09:43.073 9672.999 - 9730.236: 96.6813% ( 6) 00:09:43.073 9730.236 - 9787.472: 96.7330% ( 8) 00:09:43.073 9787.472 - 9844.709: 96.7846% ( 8) 00:09:43.073 9844.709 - 9901.946: 96.8233% ( 6) 00:09:43.073 9901.946 - 9959.183: 96.8621% ( 6) 00:09:43.073 9959.183 - 10016.419: 96.9073% ( 7) 00:09:43.073 10016.419 - 10073.656: 96.9525% ( 7) 00:09:43.073 10073.656 - 10130.893: 96.9848% ( 5) 00:09:43.073 10130.893 - 10188.129: 97.0300% ( 7) 00:09:43.073 10188.129 - 10245.366: 97.0945% ( 10) 00:09:43.073 10245.366 - 10302.603: 97.1526% ( 9) 00:09:43.073 10302.603 - 10359.839: 97.2172% ( 10) 00:09:43.073 10359.839 - 10417.076: 97.2818% ( 10) 00:09:43.073 10417.076 - 10474.313: 97.3592% ( 12) 00:09:43.073 10474.313 - 10531.549: 97.4109% ( 8) 00:09:43.073 10531.549 - 10588.786: 97.4690% ( 9) 00:09:43.073 10588.786 - 10646.023: 97.5723% ( 16) 00:09:43.073 10646.023 - 10703.259: 97.6885% ( 18) 00:09:43.073 10703.259 - 10760.496: 97.7725% ( 13) 00:09:43.073 10760.496 - 10817.733: 97.8629% ( 14) 00:09:43.073 10817.733 - 10874.969: 97.9791% ( 18) 00:09:43.073 10874.969 - 10932.206: 98.0759% ( 15) 00:09:43.073 10932.206 - 10989.443: 98.1921% ( 18) 00:09:43.073 10989.443 - 11046.679: 98.2955% ( 16) 00:09:43.073 11046.679 - 11103.916: 98.4117% ( 18) 00:09:43.073 11103.916 - 11161.153: 98.5214% ( 17) 00:09:43.073 11161.153 - 11218.390: 98.6312% ( 17) 00:09:43.073 11218.390 - 11275.626: 98.7345% ( 16) 00:09:43.073 
11275.626 - 11332.863: 98.8378% ( 16) 00:09:43.073 11332.863 - 11390.100: 98.9088% ( 11) 00:09:43.073 11390.100 - 11447.336: 98.9799% ( 11) 00:09:43.073 11447.336 - 11504.573: 99.0509% ( 11) 00:09:43.073 11504.573 - 11561.810: 99.0961% ( 7) 00:09:43.073 11561.810 - 11619.046: 99.1413% ( 7) 00:09:43.073 11619.046 - 11676.283: 99.1736% ( 5) 00:09:43.073 30220.968 - 30449.914: 99.2317% ( 9) 00:09:43.073 30449.914 - 30678.861: 99.3221% ( 14) 00:09:43.073 30678.861 - 30907.808: 99.4124% ( 14) 00:09:43.073 30907.808 - 31136.755: 99.5093% ( 15) 00:09:43.073 31136.755 - 31365.701: 99.5868% ( 12) 00:09:43.073 35486.742 - 35715.689: 99.6772% ( 14) 00:09:43.073 35715.689 - 35944.636: 99.7676% ( 14) 00:09:43.073 35944.636 - 36173.583: 99.8515% ( 13) 00:09:43.073 36173.583 - 36402.529: 99.9419% ( 14) 00:09:43.073 36402.529 - 36631.476: 100.0000% ( 9) 00:09:43.073 00:09:43.074 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:43.074 ============================================================================== 00:09:43.074 Range in us Cumulative IO count 00:09:43.074 5637.813 - 5666.431: 0.0065% ( 1) 00:09:43.074 5666.431 - 5695.050: 0.0129% ( 1) 00:09:43.074 5695.050 - 5723.668: 0.0258% ( 2) 00:09:43.074 5723.668 - 5752.286: 0.0387% ( 2) 00:09:43.074 5752.286 - 5780.905: 0.0517% ( 2) 00:09:43.074 5780.905 - 5809.523: 0.0646% ( 2) 00:09:43.074 5809.523 - 5838.141: 0.0710% ( 1) 00:09:43.074 5838.141 - 5866.760: 0.0839% ( 2) 00:09:43.074 5866.760 - 5895.378: 0.1033% ( 3) 00:09:43.074 5895.378 - 5923.997: 0.1162% ( 2) 00:09:43.074 5923.997 - 5952.615: 0.1291% ( 2) 00:09:43.074 5952.615 - 5981.233: 0.1420% ( 2) 00:09:43.074 5981.233 - 6009.852: 0.1485% ( 1) 00:09:43.074 6009.852 - 6038.470: 0.1614% ( 2) 00:09:43.074 6038.470 - 6067.088: 0.1743% ( 2) 00:09:43.074 6067.088 - 6095.707: 0.1872% ( 2) 00:09:43.074 6095.707 - 6124.325: 0.2002% ( 2) 00:09:43.074 6124.325 - 6152.943: 0.2131% ( 2) 00:09:43.074 6152.943 - 6181.562: 0.2195% ( 1) 00:09:43.074 6181.562 - 6210.180: 0.2324% ( 2) 00:09:43.074 6210.180 - 6238.798: 0.2454% ( 2) 00:09:43.074 6238.798 - 6267.417: 0.2583% ( 2) 00:09:43.074 6267.417 - 6296.035: 0.2712% ( 2) 00:09:43.074 6296.035 - 6324.653: 0.2776% ( 1) 00:09:43.074 6324.653 - 6353.272: 0.2905% ( 2) 00:09:43.074 6353.272 - 6381.890: 0.3035% ( 2) 00:09:43.074 6381.890 - 6410.508: 0.3164% ( 2) 00:09:43.074 6410.508 - 6439.127: 0.3293% ( 2) 00:09:43.074 6439.127 - 6467.745: 0.3422% ( 2) 00:09:43.074 6467.745 - 6496.363: 0.3551% ( 2) 00:09:43.074 6496.363 - 6524.982: 0.3616% ( 1) 00:09:43.074 6524.982 - 6553.600: 0.3745% ( 2) 00:09:43.074 6553.600 - 6582.218: 0.3874% ( 2) 00:09:43.074 6582.218 - 6610.837: 0.3939% ( 1) 00:09:43.074 6610.837 - 6639.455: 0.4068% ( 2) 00:09:43.074 6639.455 - 6668.073: 0.4132% ( 1) 00:09:43.074 6982.875 - 7011.493: 0.4649% ( 8) 00:09:43.074 7011.493 - 7040.112: 0.4842% ( 3) 00:09:43.074 7040.112 - 7068.730: 0.5488% ( 10) 00:09:43.074 7068.730 - 7097.348: 0.6198% ( 11) 00:09:43.074 7097.348 - 7125.967: 0.7490% ( 20) 00:09:43.074 7125.967 - 7154.585: 0.9104% ( 25) 00:09:43.074 7154.585 - 7183.203: 1.1041% ( 30) 00:09:43.074 7183.203 - 7211.822: 1.4205% ( 49) 00:09:43.074 7211.822 - 7240.440: 1.8014% ( 59) 00:09:43.074 7240.440 - 7269.059: 2.1823% ( 59) 00:09:43.074 7269.059 - 7297.677: 2.7118% ( 82) 00:09:43.074 7297.677 - 7326.295: 3.4155% ( 109) 00:09:43.074 7326.295 - 7383.532: 5.1847% ( 274) 00:09:43.074 7383.532 - 7440.769: 7.5284% ( 363) 00:09:43.074 7440.769 - 7498.005: 10.2596% ( 423) 00:09:43.074 7498.005 - 7555.242: 13.4104% ( 488) 00:09:43.074 
7555.242 - 7612.479: 17.0842% ( 569) 00:09:43.074 7612.479 - 7669.715: 21.2164% ( 640) 00:09:43.074 7669.715 - 7726.952: 25.8910% ( 724) 00:09:43.074 7726.952 - 7784.189: 30.8949% ( 775) 00:09:43.074 7784.189 - 7841.425: 36.2474% ( 829) 00:09:43.074 7841.425 - 7898.662: 41.7420% ( 851) 00:09:43.074 7898.662 - 7955.899: 47.2430% ( 852) 00:09:43.074 7955.899 - 8013.135: 52.8538% ( 869) 00:09:43.074 8013.135 - 8070.372: 58.3419% ( 850) 00:09:43.074 8070.372 - 8127.609: 63.6041% ( 815) 00:09:43.074 8127.609 - 8184.845: 68.6273% ( 778) 00:09:43.074 8184.845 - 8242.082: 73.2761% ( 720) 00:09:43.074 8242.082 - 8299.319: 77.3889% ( 637) 00:09:43.074 8299.319 - 8356.555: 80.9853% ( 557) 00:09:43.074 8356.555 - 8413.792: 84.1103% ( 484) 00:09:43.074 8413.792 - 8471.029: 86.7446% ( 408) 00:09:43.074 8471.029 - 8528.266: 89.0044% ( 350) 00:09:43.074 8528.266 - 8585.502: 90.8252% ( 282) 00:09:43.074 8585.502 - 8642.739: 92.1746% ( 209) 00:09:43.074 8642.739 - 8699.976: 93.1495% ( 151) 00:09:43.074 8699.976 - 8757.212: 94.0018% ( 132) 00:09:43.074 8757.212 - 8814.449: 94.5764% ( 89) 00:09:43.074 8814.449 - 8871.686: 94.9509% ( 58) 00:09:43.074 8871.686 - 8928.922: 95.2286% ( 43) 00:09:43.074 8928.922 - 8986.159: 95.4675% ( 37) 00:09:43.074 8986.159 - 9043.396: 95.6353% ( 26) 00:09:43.074 9043.396 - 9100.632: 95.7838% ( 23) 00:09:43.074 9100.632 - 9157.869: 95.9711% ( 29) 00:09:43.074 9157.869 - 9215.106: 96.1325% ( 25) 00:09:43.074 9215.106 - 9272.342: 96.2810% ( 23) 00:09:43.074 9272.342 - 9329.579: 96.4037% ( 19) 00:09:43.074 9329.579 - 9386.816: 96.5070% ( 16) 00:09:43.074 9386.816 - 9444.052: 96.5845% ( 12) 00:09:43.074 9444.052 - 9501.289: 96.6167% ( 5) 00:09:43.074 9501.289 - 9558.526: 96.6619% ( 7) 00:09:43.074 9558.526 - 9615.762: 96.7071% ( 7) 00:09:43.074 9615.762 - 9672.999: 96.7459% ( 6) 00:09:43.074 9672.999 - 9730.236: 96.7846% ( 6) 00:09:43.074 9730.236 - 9787.472: 96.8040% ( 3) 00:09:43.074 9787.472 - 9844.709: 96.8298% ( 4) 00:09:43.074 9844.709 - 9901.946: 96.8492% ( 3) 00:09:43.074 9901.946 - 9959.183: 96.8750% ( 4) 00:09:43.074 9959.183 - 10016.419: 96.8944% ( 3) 00:09:43.074 10016.419 - 10073.656: 96.9137% ( 3) 00:09:43.074 10073.656 - 10130.893: 96.9396% ( 4) 00:09:43.074 10130.893 - 10188.129: 96.9783% ( 6) 00:09:43.074 10188.129 - 10245.366: 97.0235% ( 7) 00:09:43.074 10245.366 - 10302.603: 97.0752% ( 8) 00:09:43.074 10302.603 - 10359.839: 97.1333% ( 9) 00:09:43.074 10359.839 - 10417.076: 97.1978% ( 10) 00:09:43.074 10417.076 - 10474.313: 97.2624% ( 10) 00:09:43.074 10474.313 - 10531.549: 97.3399% ( 12) 00:09:43.074 10531.549 - 10588.786: 97.3980% ( 9) 00:09:43.074 10588.786 - 10646.023: 97.4561% ( 9) 00:09:43.074 10646.023 - 10703.259: 97.5271% ( 11) 00:09:43.074 10703.259 - 10760.496: 97.6240% ( 15) 00:09:43.074 10760.496 - 10817.733: 97.7144% ( 14) 00:09:43.074 10817.733 - 10874.969: 97.7918% ( 12) 00:09:43.074 10874.969 - 10932.206: 97.8822% ( 14) 00:09:43.074 10932.206 - 10989.443: 98.0178% ( 21) 00:09:43.074 10989.443 - 11046.679: 98.1211% ( 16) 00:09:43.074 11046.679 - 11103.916: 98.2309% ( 17) 00:09:43.074 11103.916 - 11161.153: 98.3407% ( 17) 00:09:43.074 11161.153 - 11218.390: 98.4504% ( 17) 00:09:43.074 11218.390 - 11275.626: 98.5537% ( 16) 00:09:43.074 11275.626 - 11332.863: 98.6635% ( 17) 00:09:43.074 11332.863 - 11390.100: 98.7539% ( 14) 00:09:43.074 11390.100 - 11447.336: 98.8507% ( 15) 00:09:43.074 11447.336 - 11504.573: 98.9217% ( 11) 00:09:43.074 11504.573 - 11561.810: 98.9799% ( 9) 00:09:43.074 11561.810 - 11619.046: 99.0380% ( 9) 00:09:43.074 11619.046 - 
11676.283: 99.0832% ( 7) 00:09:43.074 11676.283 - 11733.520: 99.1154% ( 5) 00:09:43.074 11733.520 - 11790.756: 99.1413% ( 4) 00:09:43.074 11790.756 - 11847.993: 99.1671% ( 4) 00:09:43.074 11847.993 - 11905.230: 99.1736% ( 1) 00:09:43.074 29763.074 - 29992.021: 99.2123% ( 6) 00:09:43.074 29992.021 - 30220.968: 99.2962% ( 13) 00:09:43.074 30220.968 - 30449.914: 99.3931% ( 15) 00:09:43.074 30449.914 - 30678.861: 99.4770% ( 13) 00:09:43.074 30678.861 - 30907.808: 99.5674% ( 14) 00:09:43.074 30907.808 - 31136.755: 99.5868% ( 3) 00:09:43.074 34799.902 - 35028.849: 99.6255% ( 6) 00:09:43.074 35028.849 - 35257.796: 99.7159% ( 14) 00:09:43.074 35257.796 - 35486.742: 99.8063% ( 14) 00:09:43.074 35486.742 - 35715.689: 99.8967% ( 14) 00:09:43.074 35715.689 - 35944.636: 99.9871% ( 14) 00:09:43.074 35944.636 - 36173.583: 100.0000% ( 2) 00:09:43.074 00:09:43.074 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:43.074 ============================================================================== 00:09:43.074 Range in us Cumulative IO count 00:09:43.074 5294.393 - 5323.011: 0.0129% ( 2) 00:09:43.074 5323.011 - 5351.630: 0.0257% ( 2) 00:09:43.074 5351.630 - 5380.248: 0.0386% ( 2) 00:09:43.074 5380.248 - 5408.866: 0.0450% ( 1) 00:09:43.074 5408.866 - 5437.485: 0.0579% ( 2) 00:09:43.074 5437.485 - 5466.103: 0.0707% ( 2) 00:09:43.074 5466.103 - 5494.721: 0.0836% ( 2) 00:09:43.074 5494.721 - 5523.340: 0.0965% ( 2) 00:09:43.074 5523.340 - 5551.958: 0.1093% ( 2) 00:09:43.074 5551.958 - 5580.576: 0.1222% ( 2) 00:09:43.074 5580.576 - 5609.195: 0.1286% ( 1) 00:09:43.074 5609.195 - 5637.813: 0.1415% ( 2) 00:09:43.074 5637.813 - 5666.431: 0.1543% ( 2) 00:09:43.074 5666.431 - 5695.050: 0.1672% ( 2) 00:09:43.074 5695.050 - 5723.668: 0.1800% ( 2) 00:09:43.074 5723.668 - 5752.286: 0.1929% ( 2) 00:09:43.074 5752.286 - 5780.905: 0.2058% ( 2) 00:09:43.074 5780.905 - 5809.523: 0.2122% ( 1) 00:09:43.074 5809.523 - 5838.141: 0.2251% ( 2) 00:09:43.074 5838.141 - 5866.760: 0.2379% ( 2) 00:09:43.074 5866.760 - 5895.378: 0.2508% ( 2) 00:09:43.074 5895.378 - 5923.997: 0.2636% ( 2) 00:09:43.074 5923.997 - 5952.615: 0.2701% ( 1) 00:09:43.074 5952.615 - 5981.233: 0.2829% ( 2) 00:09:43.074 5981.233 - 6009.852: 0.2958% ( 2) 00:09:43.074 6009.852 - 6038.470: 0.3086% ( 2) 00:09:43.074 6038.470 - 6067.088: 0.3215% ( 2) 00:09:43.074 6067.088 - 6095.707: 0.3344% ( 2) 00:09:43.074 6095.707 - 6124.325: 0.3408% ( 1) 00:09:43.074 6124.325 - 6152.943: 0.3537% ( 2) 00:09:43.074 6152.943 - 6181.562: 0.3665% ( 2) 00:09:43.074 6181.562 - 6210.180: 0.3729% ( 1) 00:09:43.074 6210.180 - 6238.798: 0.3858% ( 2) 00:09:43.075 6238.798 - 6267.417: 0.3987% ( 2) 00:09:43.075 6267.417 - 6296.035: 0.4115% ( 2) 00:09:43.075 6925.638 - 6954.257: 0.4180% ( 1) 00:09:43.075 6954.257 - 6982.875: 0.4437% ( 4) 00:09:43.075 6982.875 - 7011.493: 0.4887% ( 7) 00:09:43.075 7011.493 - 7040.112: 0.5466% ( 9) 00:09:43.075 7040.112 - 7068.730: 0.6559% ( 17) 00:09:43.075 7068.730 - 7097.348: 0.7459% ( 14) 00:09:43.075 7097.348 - 7125.967: 0.8423% ( 15) 00:09:43.075 7125.967 - 7154.585: 0.9259% ( 13) 00:09:43.075 7154.585 - 7183.203: 1.0867% ( 25) 00:09:43.075 7183.203 - 7211.822: 1.3760% ( 45) 00:09:43.075 7211.822 - 7240.440: 1.8326% ( 71) 00:09:43.075 7240.440 - 7269.059: 2.3020% ( 73) 00:09:43.075 7269.059 - 7297.677: 2.8742% ( 89) 00:09:43.075 7297.677 - 7326.295: 3.4529% ( 90) 00:09:43.075 7326.295 - 7383.532: 5.1505% ( 264) 00:09:43.075 7383.532 - 7440.769: 7.5553% ( 374) 00:09:43.075 7440.769 - 7498.005: 10.2366% ( 417) 00:09:43.075 7498.005 - 7555.242: 
13.4452% ( 499) 00:09:43.075 7555.242 - 7612.479: 17.1939% ( 583) 00:09:43.075 7612.479 - 7669.715: 21.3413% ( 645) 00:09:43.075 7669.715 - 7726.952: 26.1253% ( 744) 00:09:43.075 7726.952 - 7784.189: 31.0764% ( 770) 00:09:43.075 7784.189 - 7841.425: 36.2076% ( 798) 00:09:43.075 7841.425 - 7898.662: 41.5316% ( 828) 00:09:43.075 7898.662 - 7955.899: 47.1193% ( 869) 00:09:43.075 7955.899 - 8013.135: 52.6620% ( 862) 00:09:43.075 8013.135 - 8070.372: 58.1919% ( 860) 00:09:43.075 8070.372 - 8127.609: 63.5224% ( 829) 00:09:43.075 8127.609 - 8184.845: 68.4671% ( 769) 00:09:43.075 8184.845 - 8242.082: 72.9745% ( 701) 00:09:43.075 8242.082 - 8299.319: 77.0448% ( 633) 00:09:43.075 8299.319 - 8356.555: 80.6327% ( 558) 00:09:43.075 8356.555 - 8413.792: 83.7320% ( 482) 00:09:43.075 8413.792 - 8471.029: 86.4712% ( 426) 00:09:43.075 8471.029 - 8528.266: 88.7924% ( 361) 00:09:43.075 8528.266 - 8585.502: 90.5993% ( 281) 00:09:43.075 8585.502 - 8642.739: 91.9174% ( 205) 00:09:43.075 8642.739 - 8699.976: 92.8884% ( 151) 00:09:43.075 8699.976 - 8757.212: 93.6085% ( 112) 00:09:43.075 8757.212 - 8814.449: 94.2065% ( 93) 00:09:43.075 8814.449 - 8871.686: 94.5345% ( 51) 00:09:43.075 8871.686 - 8928.922: 94.8302% ( 46) 00:09:43.075 8928.922 - 8986.159: 95.0553% ( 35) 00:09:43.075 8986.159 - 9043.396: 95.2868% ( 36) 00:09:43.075 9043.396 - 9100.632: 95.4733% ( 29) 00:09:43.075 9100.632 - 9157.869: 95.6469% ( 27) 00:09:43.075 9157.869 - 9215.106: 95.7883% ( 22) 00:09:43.075 9215.106 - 9272.342: 95.9234% ( 21) 00:09:43.075 9272.342 - 9329.579: 96.0520% ( 20) 00:09:43.075 9329.579 - 9386.816: 96.1741% ( 19) 00:09:43.075 9386.816 - 9444.052: 96.2641% ( 14) 00:09:43.075 9444.052 - 9501.289: 96.3156% ( 8) 00:09:43.075 9501.289 - 9558.526: 96.3670% ( 8) 00:09:43.075 9558.526 - 9615.762: 96.4185% ( 8) 00:09:43.075 9615.762 - 9672.999: 96.4442% ( 4) 00:09:43.075 9672.999 - 9730.236: 96.4956% ( 8) 00:09:43.075 9730.236 - 9787.472: 96.5471% ( 8) 00:09:43.075 9787.472 - 9844.709: 96.5985% ( 8) 00:09:43.075 9844.709 - 9901.946: 96.6371% ( 6) 00:09:43.075 9901.946 - 9959.183: 96.6821% ( 7) 00:09:43.075 9959.183 - 10016.419: 96.7207% ( 6) 00:09:43.075 10016.419 - 10073.656: 96.7657% ( 7) 00:09:43.075 10073.656 - 10130.893: 96.8107% ( 7) 00:09:43.075 10130.893 - 10188.129: 96.8621% ( 8) 00:09:43.075 10188.129 - 10245.366: 96.9072% ( 7) 00:09:43.075 10245.366 - 10302.603: 96.9779% ( 11) 00:09:43.075 10302.603 - 10359.839: 97.0550% ( 12) 00:09:43.075 10359.839 - 10417.076: 97.1065% ( 8) 00:09:43.075 10417.076 - 10474.313: 97.1901% ( 13) 00:09:43.075 10474.313 - 10531.549: 97.2672% ( 12) 00:09:43.075 10531.549 - 10588.786: 97.3444% ( 12) 00:09:43.075 10588.786 - 10646.023: 97.4344% ( 14) 00:09:43.075 10646.023 - 10703.259: 97.5051% ( 11) 00:09:43.075 10703.259 - 10760.496: 97.6016% ( 15) 00:09:43.075 10760.496 - 10817.733: 97.6980% ( 15) 00:09:43.075 10817.733 - 10874.969: 97.8009% ( 16) 00:09:43.075 10874.969 - 10932.206: 97.8781% ( 12) 00:09:43.075 10932.206 - 10989.443: 97.9617% ( 13) 00:09:43.075 10989.443 - 11046.679: 98.0710% ( 17) 00:09:43.075 11046.679 - 11103.916: 98.1739% ( 16) 00:09:43.075 11103.916 - 11161.153: 98.2832% ( 17) 00:09:43.075 11161.153 - 11218.390: 98.3925% ( 17) 00:09:43.075 11218.390 - 11275.626: 98.5018% ( 17) 00:09:43.075 11275.626 - 11332.863: 98.6111% ( 17) 00:09:43.075 11332.863 - 11390.100: 98.7269% ( 18) 00:09:43.075 11390.100 - 11447.336: 98.8169% ( 14) 00:09:43.075 11447.336 - 11504.573: 98.8876% ( 11) 00:09:43.075 11504.573 - 11561.810: 98.9583% ( 11) 00:09:43.075 11561.810 - 11619.046: 99.0226% ( 
10) 00:09:43.075 11619.046 - 11676.283: 99.0741% ( 8) 00:09:43.075 11676.283 - 11733.520: 99.0998% ( 4) 00:09:43.075 11733.520 - 11790.756: 99.1255% ( 4) 00:09:43.075 11790.756 - 11847.993: 99.1512% ( 4) 00:09:43.075 11847.993 - 11905.230: 99.1770% ( 4) 00:09:43.075 24039.406 - 24153.879: 99.2091% ( 5) 00:09:43.075 24153.879 - 24268.353: 99.2541% ( 7) 00:09:43.075 24268.353 - 24382.826: 99.2991% ( 7) 00:09:43.075 24382.826 - 24497.300: 99.3506% ( 8) 00:09:43.075 24497.300 - 24611.773: 99.3956% ( 7) 00:09:43.075 24611.773 - 24726.246: 99.4406% ( 7) 00:09:43.075 24726.246 - 24840.720: 99.4856% ( 7) 00:09:43.075 24840.720 - 24955.193: 99.5370% ( 8) 00:09:43.075 24955.193 - 25069.666: 99.5820% ( 7) 00:09:43.075 25069.666 - 25184.140: 99.5885% ( 1) 00:09:43.075 29763.074 - 29992.021: 99.5949% ( 1) 00:09:43.075 29992.021 - 30220.968: 99.6849% ( 14) 00:09:43.075 30220.968 - 30449.914: 99.7685% ( 13) 00:09:43.075 30449.914 - 30678.861: 99.8585% ( 14) 00:09:43.075 30678.861 - 30907.808: 99.9486% ( 14) 00:09:43.075 30907.808 - 31136.755: 100.0000% ( 8) 00:09:43.075 00:09:43.075 08:31:04 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:44.458 Initializing NVMe Controllers 00:09:44.458 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:44.458 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:44.459 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:44.459 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:44.459 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:44.459 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:44.459 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:44.459 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:44.459 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:44.459 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:44.459 Initialization complete. Launching workers. 
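The spdk_nvme_perf invocation traced just above drives the write workload whose results follow. The annotated command below is a reference sketch only: the option meanings are my reading of the tool's usage output, not something stated in this log, and should be checked against the tool's own help text for this build.

  # Same command as in the trace, with assumed option meanings as comments:
  #   -q 128    queue depth (number of outstanding I/Os)
  #   -w write  workload pattern: sequential writes (as opposed to randwrite)
  #   -o 12288  I/O size in bytes (12 KiB, i.e. 3 x 4 KiB blocks)
  #   -t 1      run time in seconds
  #   -LL       latency tracking; the doubled flag is what requests the detailed per-bucket histograms
  #   -i 0      shared memory group ID, so the process can coexist with other SPDK processes
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0

In the output that follows, the per-device summary tables list throughput (IOPS, MiB/s) and latency percentiles; a line such as "1.00000% : 9844.709us" reads as: 1% of I/Os completed within about 9.8 ms. The per-bucket histogram rows appear to give a cumulative percentage with the I/O count for that bucket in parentheses.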
00:09:44.459 ======================================================== 00:09:44.459 Latency(us) 00:09:44.459 Device Information : IOPS MiB/s Average min max 00:09:44.459 PCIE (0000:00:10.0) NSID 1 from core 0: 9430.71 110.52 13586.04 8513.11 38115.57 00:09:44.459 PCIE (0000:00:11.0) NSID 1 from core 0: 9430.71 110.52 13577.93 8343.29 38540.51 00:09:44.459 PCIE (0000:00:13.0) NSID 1 from core 0: 9430.71 110.52 13568.56 6842.24 39261.57 00:09:44.459 PCIE (0000:00:12.0) NSID 1 from core 0: 9430.71 110.52 13558.72 6534.82 39457.66 00:09:44.459 PCIE (0000:00:12.0) NSID 2 from core 0: 9430.71 110.52 13549.10 6076.69 39541.94 00:09:44.459 PCIE (0000:00:12.0) NSID 3 from core 0: 9494.43 111.26 13448.72 5942.94 30200.79 00:09:44.459 ======================================================== 00:09:44.459 Total : 56648.00 663.84 13548.07 5942.94 39541.94 00:09:44.459 00:09:44.459 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:44.459 ================================================================================= 00:09:44.459 1.00000% : 9844.709us 00:09:44.459 10.00000% : 10646.023us 00:09:44.459 25.00000% : 11218.390us 00:09:44.459 50.00000% : 12935.490us 00:09:44.459 75.00000% : 15453.904us 00:09:44.459 90.00000% : 17056.531us 00:09:44.459 95.00000% : 17972.318us 00:09:44.459 98.00000% : 19117.052us 00:09:44.459 99.00000% : 25642.033us 00:09:44.459 99.50000% : 36402.529us 00:09:44.459 99.90000% : 37776.210us 00:09:44.459 99.99000% : 38234.103us 00:09:44.459 99.99900% : 38234.103us 00:09:44.459 99.99990% : 38234.103us 00:09:44.459 99.99999% : 38234.103us 00:09:44.459 00:09:44.459 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:44.459 ================================================================================= 00:09:44.459 1.00000% : 9901.946us 00:09:44.459 10.00000% : 10703.259us 00:09:44.459 25.00000% : 11161.153us 00:09:44.459 50.00000% : 12878.253us 00:09:44.459 75.00000% : 15453.904us 00:09:44.459 90.00000% : 17056.531us 00:09:44.459 95.00000% : 17972.318us 00:09:44.459 98.00000% : 19231.525us 00:09:44.459 99.00000% : 26214.400us 00:09:44.459 99.50000% : 37089.369us 00:09:44.459 99.90000% : 38463.050us 00:09:44.459 99.99000% : 38691.997us 00:09:44.459 99.99900% : 38691.997us 00:09:44.459 99.99990% : 38691.997us 00:09:44.459 99.99999% : 38691.997us 00:09:44.459 00:09:44.459 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:44.459 ================================================================================= 00:09:44.459 1.00000% : 10016.419us 00:09:44.459 10.00000% : 10760.496us 00:09:44.459 25.00000% : 11218.390us 00:09:44.459 50.00000% : 12878.253us 00:09:44.459 75.00000% : 15453.904us 00:09:44.459 90.00000% : 16942.058us 00:09:44.459 95.00000% : 17972.318us 00:09:44.459 98.00000% : 18888.105us 00:09:44.459 99.00000% : 27130.187us 00:09:44.459 99.50000% : 38005.156us 00:09:44.459 99.90000% : 39149.890us 00:09:44.459 99.99000% : 39378.837us 00:09:44.459 99.99900% : 39378.837us 00:09:44.459 99.99990% : 39378.837us 00:09:44.459 99.99999% : 39378.837us 00:09:44.459 00:09:44.459 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:44.459 ================================================================================= 00:09:44.459 1.00000% : 9787.472us 00:09:44.459 10.00000% : 10703.259us 00:09:44.459 25.00000% : 11103.916us 00:09:44.459 50.00000% : 12763.780us 00:09:44.459 75.00000% : 15453.904us 00:09:44.459 90.00000% : 17171.004us 00:09:44.459 95.00000% : 18315.738us 00:09:44.459 98.00000% : 18773.631us 
00:09:44.459 99.00000% : 27702.554us 00:09:44.459 99.50000% : 38234.103us 00:09:44.459 99.90000% : 39378.837us 00:09:44.459 99.99000% : 39607.783us 00:09:44.459 99.99900% : 39607.783us 00:09:44.459 99.99990% : 39607.783us 00:09:44.459 99.99999% : 39607.783us 00:09:44.459 00:09:44.459 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:44.459 ================================================================================= 00:09:44.459 1.00000% : 9844.709us 00:09:44.459 10.00000% : 10760.496us 00:09:44.459 25.00000% : 11161.153us 00:09:44.459 50.00000% : 12821.017us 00:09:44.459 75.00000% : 15568.377us 00:09:44.459 90.00000% : 17171.004us 00:09:44.459 95.00000% : 17743.371us 00:09:44.459 98.00000% : 18544.685us 00:09:44.459 99.00000% : 28160.447us 00:09:44.459 99.50000% : 38463.050us 00:09:44.459 99.90000% : 39378.837us 00:09:44.459 99.99000% : 39607.783us 00:09:44.459 99.99900% : 39607.783us 00:09:44.459 99.99990% : 39607.783us 00:09:44.459 99.99999% : 39607.783us 00:09:44.459 00:09:44.459 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:44.459 ================================================================================= 00:09:44.459 1.00000% : 9615.762us 00:09:44.459 10.00000% : 10703.259us 00:09:44.459 25.00000% : 11161.153us 00:09:44.459 50.00000% : 12992.727us 00:09:44.459 75.00000% : 15453.904us 00:09:44.459 90.00000% : 17171.004us 00:09:44.459 95.00000% : 17857.845us 00:09:44.459 98.00000% : 18773.631us 00:09:44.459 99.00000% : 19345.998us 00:09:44.459 99.50000% : 28618.341us 00:09:44.459 99.90000% : 29992.021us 00:09:44.459 99.99000% : 30220.968us 00:09:44.459 99.99900% : 30220.968us 00:09:44.459 99.99990% : 30220.968us 00:09:44.459 99.99999% : 30220.968us 00:09:44.459 00:09:44.459 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:44.459 ============================================================================== 00:09:44.459 Range in us Cumulative IO count 00:09:44.459 8471.029 - 8528.266: 0.0528% ( 5) 00:09:44.459 8528.266 - 8585.502: 0.0739% ( 2) 00:09:44.459 8585.502 - 8642.739: 0.0950% ( 2) 00:09:44.459 8642.739 - 8699.976: 0.2323% ( 13) 00:09:44.459 8699.976 - 8757.212: 0.2956% ( 6) 00:09:44.459 8757.212 - 8814.449: 0.3801% ( 8) 00:09:44.459 8814.449 - 8871.686: 0.4223% ( 4) 00:09:44.459 8871.686 - 8928.922: 0.4434% ( 2) 00:09:44.459 8928.922 - 8986.159: 0.4645% ( 2) 00:09:44.459 8986.159 - 9043.396: 0.4751% ( 1) 00:09:44.459 9100.632 - 9157.869: 0.4856% ( 1) 00:09:44.459 9157.869 - 9215.106: 0.5701% ( 8) 00:09:44.459 9215.106 - 9272.342: 0.5912% ( 2) 00:09:44.459 9386.816 - 9444.052: 0.6334% ( 4) 00:09:44.459 9444.052 - 9501.289: 0.6862% ( 5) 00:09:44.459 9501.289 - 9558.526: 0.6968% ( 1) 00:09:44.459 9558.526 - 9615.762: 0.7601% ( 6) 00:09:44.459 9615.762 - 9672.999: 0.9079% ( 14) 00:09:44.459 9672.999 - 9730.236: 0.9396% ( 3) 00:09:44.459 9730.236 - 9787.472: 0.9924% ( 5) 00:09:44.459 9787.472 - 9844.709: 1.0346% ( 4) 00:09:44.459 9844.709 - 9901.946: 1.0874% ( 5) 00:09:44.459 9901.946 - 9959.183: 1.2352% ( 14) 00:09:44.459 9959.183 - 10016.419: 1.5308% ( 28) 00:09:44.459 10016.419 - 10073.656: 1.9426% ( 39) 00:09:44.459 10073.656 - 10130.893: 2.3226% ( 36) 00:09:44.459 10130.893 - 10188.129: 2.7660% ( 42) 00:09:44.459 10188.129 - 10245.366: 3.3045% ( 51) 00:09:44.459 10245.366 - 10302.603: 3.8640% ( 53) 00:09:44.459 10302.603 - 10359.839: 4.8142% ( 90) 00:09:44.459 10359.839 - 10417.076: 5.6166% ( 76) 00:09:44.459 10417.076 - 10474.313: 6.6195% ( 95) 00:09:44.459 10474.313 - 10531.549: 7.5908% ( 92) 
00:09:44.459 10531.549 - 10588.786: 8.7204% ( 107) 00:09:44.459 10588.786 - 10646.023: 10.0507% ( 126) 00:09:44.459 10646.023 - 10703.259: 11.5076% ( 138) 00:09:44.459 10703.259 - 10760.496: 12.9223% ( 134) 00:09:44.459 10760.496 - 10817.733: 14.5693% ( 156) 00:09:44.459 10817.733 - 10874.969: 16.3746% ( 171) 00:09:44.459 10874.969 - 10932.206: 18.0004% ( 154) 00:09:44.459 10932.206 - 10989.443: 19.7213% ( 163) 00:09:44.459 10989.443 - 11046.679: 21.2521% ( 145) 00:09:44.459 11046.679 - 11103.916: 22.7196% ( 139) 00:09:44.459 11103.916 - 11161.153: 23.9654% ( 118) 00:09:44.459 11161.153 - 11218.390: 25.4962% ( 145) 00:09:44.459 11218.390 - 11275.626: 26.6681% ( 111) 00:09:44.460 11275.626 - 11332.863: 27.7872% ( 106) 00:09:44.460 11332.863 - 11390.100: 28.8640% ( 102) 00:09:44.460 11390.100 - 11447.336: 30.0781% ( 115) 00:09:44.460 11447.336 - 11504.573: 31.3556% ( 121) 00:09:44.460 11504.573 - 11561.810: 32.5697% ( 115) 00:09:44.460 11561.810 - 11619.046: 33.7838% ( 115) 00:09:44.460 11619.046 - 11676.283: 34.8184% ( 98) 00:09:44.460 11676.283 - 11733.520: 35.6524% ( 79) 00:09:44.460 11733.520 - 11790.756: 36.4865% ( 79) 00:09:44.460 11790.756 - 11847.993: 37.1833% ( 66) 00:09:44.460 11847.993 - 11905.230: 37.9012% ( 68) 00:09:44.460 11905.230 - 11962.466: 38.6085% ( 67) 00:09:44.460 11962.466 - 12019.703: 39.3687% ( 72) 00:09:44.460 12019.703 - 12076.940: 40.1182% ( 71) 00:09:44.460 12076.940 - 12134.176: 40.9206% ( 76) 00:09:44.460 12134.176 - 12191.413: 41.7335% ( 77) 00:09:44.460 12191.413 - 12248.650: 42.5253% ( 75) 00:09:44.460 12248.650 - 12305.886: 43.5389% ( 96) 00:09:44.460 12305.886 - 12363.123: 44.3729% ( 79) 00:09:44.460 12363.123 - 12420.360: 45.0802% ( 67) 00:09:44.460 12420.360 - 12477.597: 45.6398% ( 53) 00:09:44.460 12477.597 - 12534.833: 46.5160% ( 83) 00:09:44.460 12534.833 - 12592.070: 47.0650% ( 52) 00:09:44.460 12592.070 - 12649.307: 47.5823% ( 49) 00:09:44.460 12649.307 - 12706.543: 48.1102% ( 50) 00:09:44.460 12706.543 - 12763.780: 48.5114% ( 38) 00:09:44.460 12763.780 - 12821.017: 49.0076% ( 47) 00:09:44.460 12821.017 - 12878.253: 49.5988% ( 56) 00:09:44.460 12878.253 - 12935.490: 50.2639% ( 63) 00:09:44.460 12935.490 - 12992.727: 51.0030% ( 70) 00:09:44.460 12992.727 - 13049.963: 51.6364% ( 60) 00:09:44.460 13049.963 - 13107.200: 52.1643% ( 50) 00:09:44.460 13107.200 - 13164.437: 52.5760% ( 39) 00:09:44.460 13164.437 - 13221.673: 53.3045% ( 69) 00:09:44.460 13221.673 - 13278.910: 53.8957% ( 56) 00:09:44.460 13278.910 - 13336.147: 54.3602% ( 44) 00:09:44.460 13336.147 - 13393.383: 54.9092% ( 52) 00:09:44.460 13393.383 - 13450.620: 55.3421% ( 41) 00:09:44.460 13450.620 - 13507.857: 55.8805% ( 51) 00:09:44.460 13507.857 - 13565.093: 56.3767% ( 47) 00:09:44.460 13565.093 - 13622.330: 56.9785% ( 57) 00:09:44.460 13622.330 - 13679.567: 57.5486% ( 54) 00:09:44.460 13679.567 - 13736.803: 58.1081% ( 53) 00:09:44.460 13736.803 - 13794.040: 58.9844% ( 83) 00:09:44.460 13794.040 - 13851.277: 59.8606% ( 83) 00:09:44.460 13851.277 - 13908.514: 60.5152% ( 62) 00:09:44.460 13908.514 - 13965.750: 60.9903% ( 45) 00:09:44.460 13965.750 - 14022.987: 61.5393% ( 52) 00:09:44.460 14022.987 - 14080.224: 61.9827% ( 42) 00:09:44.460 14080.224 - 14137.460: 62.4367% ( 43) 00:09:44.460 14137.460 - 14194.697: 62.9751% ( 51) 00:09:44.460 14194.697 - 14251.934: 63.5346% ( 53) 00:09:44.460 14251.934 - 14309.170: 64.1892% ( 62) 00:09:44.460 14309.170 - 14366.407: 64.7487% ( 53) 00:09:44.460 14366.407 - 14423.644: 65.3294% ( 55) 00:09:44.460 14423.644 - 14480.880: 65.9206% ( 56) 00:09:44.460 
14480.880 - 14538.117: 66.5541% ( 60) 00:09:44.460 14538.117 - 14595.354: 66.9869% ( 41) 00:09:44.460 14595.354 - 14652.590: 67.6943% ( 67) 00:09:44.460 14652.590 - 14767.064: 68.9611% ( 120) 00:09:44.460 14767.064 - 14881.537: 70.3758% ( 134) 00:09:44.460 14881.537 - 14996.010: 71.6322% ( 119) 00:09:44.460 14996.010 - 15110.484: 72.6774% ( 99) 00:09:44.460 15110.484 - 15224.957: 73.7226% ( 99) 00:09:44.460 15224.957 - 15339.431: 74.5777% ( 81) 00:09:44.460 15339.431 - 15453.904: 75.3801% ( 76) 00:09:44.460 15453.904 - 15568.377: 76.4147% ( 98) 00:09:44.460 15568.377 - 15682.851: 77.4177% ( 95) 00:09:44.460 15682.851 - 15797.324: 78.4840% ( 101) 00:09:44.460 15797.324 - 15911.797: 79.5608% ( 102) 00:09:44.460 15911.797 - 16026.271: 80.4476% ( 84) 00:09:44.460 16026.271 - 16140.744: 81.4506% ( 95) 00:09:44.460 16140.744 - 16255.217: 82.5908% ( 108) 00:09:44.460 16255.217 - 16369.691: 83.6888% ( 104) 00:09:44.460 16369.691 - 16484.164: 84.8079% ( 106) 00:09:44.460 16484.164 - 16598.638: 85.7264% ( 87) 00:09:44.460 16598.638 - 16713.111: 86.6871% ( 91) 00:09:44.460 16713.111 - 16827.584: 87.6900% ( 95) 00:09:44.460 16827.584 - 16942.058: 88.9464% ( 119) 00:09:44.460 16942.058 - 17056.531: 90.0127% ( 101) 00:09:44.460 17056.531 - 17171.004: 91.1318% ( 106) 00:09:44.460 17171.004 - 17285.478: 91.8813% ( 71) 00:09:44.460 17285.478 - 17399.951: 92.7576% ( 83) 00:09:44.460 17399.951 - 17514.424: 93.3910% ( 60) 00:09:44.460 17514.424 - 17628.898: 93.8872% ( 47) 00:09:44.460 17628.898 - 17743.371: 94.3623% ( 45) 00:09:44.460 17743.371 - 17857.845: 94.7530% ( 37) 00:09:44.460 17857.845 - 17972.318: 95.0802% ( 31) 00:09:44.460 17972.318 - 18086.791: 95.4709% ( 37) 00:09:44.460 18086.791 - 18201.265: 95.9248% ( 43) 00:09:44.460 18201.265 - 18315.738: 96.3577% ( 41) 00:09:44.460 18315.738 - 18430.211: 96.9595% ( 57) 00:09:44.460 18430.211 - 18544.685: 97.2973% ( 32) 00:09:44.460 18544.685 - 18659.158: 97.5929% ( 28) 00:09:44.460 18659.158 - 18773.631: 97.7196% ( 12) 00:09:44.460 18773.631 - 18888.105: 97.8041% ( 8) 00:09:44.460 18888.105 - 19002.578: 97.9307% ( 12) 00:09:44.460 19002.578 - 19117.052: 98.0469% ( 11) 00:09:44.460 19117.052 - 19231.525: 98.0997% ( 5) 00:09:44.460 19231.525 - 19345.998: 98.1419% ( 4) 00:09:44.460 19345.998 - 19460.472: 98.2052% ( 6) 00:09:44.460 19460.472 - 19574.945: 98.2369% ( 3) 00:09:44.460 19574.945 - 19689.418: 98.2791% ( 4) 00:09:44.460 19689.418 - 19803.892: 98.3214% ( 4) 00:09:44.460 19803.892 - 19918.365: 98.3530% ( 3) 00:09:44.460 19918.365 - 20032.838: 98.3953% ( 4) 00:09:44.460 20032.838 - 20147.312: 98.4269% ( 3) 00:09:44.460 20147.312 - 20261.785: 98.4692% ( 4) 00:09:44.460 20261.785 - 20376.259: 98.5114% ( 4) 00:09:44.460 20376.259 - 20490.732: 98.5536% ( 4) 00:09:44.460 20490.732 - 20605.205: 98.6064% ( 5) 00:09:44.460 20605.205 - 20719.679: 98.6486% ( 4) 00:09:44.460 24840.720 - 24955.193: 98.6803% ( 3) 00:09:44.460 24955.193 - 25069.666: 98.7542% ( 7) 00:09:44.460 25069.666 - 25184.140: 98.8176% ( 6) 00:09:44.460 25184.140 - 25298.613: 98.8704% ( 5) 00:09:44.460 25298.613 - 25413.086: 98.9337% ( 6) 00:09:44.460 25413.086 - 25527.560: 98.9970% ( 6) 00:09:44.460 25527.560 - 25642.033: 99.0604% ( 6) 00:09:44.460 25642.033 - 25756.507: 99.1343% ( 7) 00:09:44.460 25756.507 - 25870.980: 99.1976% ( 6) 00:09:44.460 25870.980 - 25985.453: 99.2610% ( 6) 00:09:44.460 25985.453 - 26099.927: 99.3138% ( 5) 00:09:44.460 26099.927 - 26214.400: 99.3243% ( 1) 00:09:44.460 35486.742 - 35715.689: 99.3666% ( 4) 00:09:44.460 35715.689 - 35944.636: 99.4193% ( 5) 00:09:44.460 
35944.636 - 36173.583: 99.4827% ( 6) 00:09:44.460 36173.583 - 36402.529: 99.5355% ( 5) 00:09:44.460 36402.529 - 36631.476: 99.5988% ( 6) 00:09:44.460 36631.476 - 36860.423: 99.6622% ( 6) 00:09:44.460 36860.423 - 37089.369: 99.7255% ( 6) 00:09:44.460 37089.369 - 37318.316: 99.7783% ( 5) 00:09:44.460 37318.316 - 37547.263: 99.8416% ( 6) 00:09:44.460 37547.263 - 37776.210: 99.9155% ( 7) 00:09:44.460 37776.210 - 38005.156: 99.9789% ( 6) 00:09:44.460 38005.156 - 38234.103: 100.0000% ( 2) 00:09:44.460 00:09:44.460 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:44.460 ============================================================================== 00:09:44.460 Range in us Cumulative IO count 00:09:44.460 8299.319 - 8356.555: 0.0106% ( 1) 00:09:44.460 8356.555 - 8413.792: 0.0845% ( 7) 00:09:44.460 8413.792 - 8471.029: 0.1795% ( 9) 00:09:44.460 8471.029 - 8528.266: 0.2745% ( 9) 00:09:44.460 8528.266 - 8585.502: 0.4223% ( 14) 00:09:44.460 8585.502 - 8642.739: 0.4856% ( 6) 00:09:44.460 8642.739 - 8699.976: 0.5384% ( 5) 00:09:44.460 8699.976 - 8757.212: 0.5701% ( 3) 00:09:44.460 8757.212 - 8814.449: 0.6018% ( 3) 00:09:44.460 8814.449 - 8871.686: 0.6334% ( 3) 00:09:44.460 8871.686 - 8928.922: 0.6651% ( 3) 00:09:44.460 8928.922 - 8986.159: 0.6757% ( 1) 00:09:44.460 9558.526 - 9615.762: 0.6862% ( 1) 00:09:44.460 9615.762 - 9672.999: 0.7073% ( 2) 00:09:44.460 9672.999 - 9730.236: 0.7285% ( 2) 00:09:44.460 9730.236 - 9787.472: 0.7812% ( 5) 00:09:44.460 9787.472 - 9844.709: 0.8974% ( 11) 00:09:44.460 9844.709 - 9901.946: 1.0346% ( 13) 00:09:44.460 9901.946 - 9959.183: 1.2141% ( 17) 00:09:44.460 9959.183 - 10016.419: 1.4886% ( 26) 00:09:44.460 10016.419 - 10073.656: 1.7948% ( 29) 00:09:44.460 10073.656 - 10130.893: 1.9742% ( 17) 00:09:44.460 10130.893 - 10188.129: 2.2698% ( 28) 00:09:44.460 10188.129 - 10245.366: 2.7238% ( 43) 00:09:44.460 10245.366 - 10302.603: 3.2095% ( 46) 00:09:44.460 10302.603 - 10359.839: 3.7796% ( 54) 00:09:44.460 10359.839 - 10417.076: 4.5503% ( 73) 00:09:44.460 10417.076 - 10474.313: 5.3315% ( 74) 00:09:44.460 10474.313 - 10531.549: 6.3661% ( 98) 00:09:44.460 10531.549 - 10588.786: 7.6330% ( 120) 00:09:44.460 10588.786 - 10646.023: 8.8577% ( 116) 00:09:44.460 10646.023 - 10703.259: 10.5258% ( 158) 00:09:44.460 10703.259 - 10760.496: 12.3100% ( 169) 00:09:44.460 10760.496 - 10817.733: 14.0414% ( 164) 00:09:44.460 10817.733 - 10874.969: 15.9734% ( 183) 00:09:44.460 10874.969 - 10932.206: 18.0954% ( 201) 00:09:44.460 10932.206 - 10989.443: 19.8902% ( 170) 00:09:44.460 10989.443 - 11046.679: 21.9911% ( 199) 00:09:44.460 11046.679 - 11103.916: 23.8809% ( 179) 00:09:44.460 11103.916 - 11161.153: 25.5490% ( 158) 00:09:44.460 11161.153 - 11218.390: 27.0481% ( 142) 00:09:44.460 11218.390 - 11275.626: 28.5579% ( 143) 00:09:44.460 11275.626 - 11332.863: 29.8036% ( 118) 00:09:44.461 11332.863 - 11390.100: 31.0494% ( 118) 00:09:44.461 11390.100 - 11447.336: 32.3796% ( 126) 00:09:44.461 11447.336 - 11504.573: 33.5938% ( 115) 00:09:44.461 11504.573 - 11561.810: 34.6284% ( 98) 00:09:44.461 11561.810 - 11619.046: 35.6102% ( 93) 00:09:44.461 11619.046 - 11676.283: 36.6343% ( 97) 00:09:44.461 11676.283 - 11733.520: 37.7323% ( 104) 00:09:44.461 11733.520 - 11790.756: 38.7247% ( 94) 00:09:44.461 11790.756 - 11847.993: 39.3792% ( 62) 00:09:44.461 11847.993 - 11905.230: 39.9916% ( 58) 00:09:44.461 11905.230 - 11962.466: 40.7095% ( 68) 00:09:44.461 11962.466 - 12019.703: 41.3851% ( 64) 00:09:44.461 12019.703 - 12076.940: 42.0080% ( 59) 00:09:44.461 12076.940 - 12134.176: 42.7259% ( 68) 
00:09:44.461 12134.176 - 12191.413: 43.4966% ( 73) 00:09:44.461 12191.413 - 12248.650: 43.9928% ( 47) 00:09:44.461 12248.650 - 12305.886: 44.5101% ( 49) 00:09:44.461 12305.886 - 12363.123: 44.8057% ( 28) 00:09:44.461 12363.123 - 12420.360: 45.0802% ( 26) 00:09:44.461 12420.360 - 12477.597: 45.4392% ( 34) 00:09:44.461 12477.597 - 12534.833: 46.0093% ( 54) 00:09:44.461 12534.833 - 12592.070: 46.9278% ( 87) 00:09:44.461 12592.070 - 12649.307: 47.8357% ( 86) 00:09:44.461 12649.307 - 12706.543: 48.7120% ( 83) 00:09:44.461 12706.543 - 12763.780: 49.4510% ( 70) 00:09:44.461 12763.780 - 12821.017: 49.7677% ( 30) 00:09:44.461 12821.017 - 12878.253: 50.2217% ( 43) 00:09:44.461 12878.253 - 12935.490: 50.7285% ( 48) 00:09:44.461 12935.490 - 12992.727: 51.2774% ( 52) 00:09:44.461 12992.727 - 13049.963: 51.9637% ( 65) 00:09:44.461 13049.963 - 13107.200: 52.6816% ( 68) 00:09:44.461 13107.200 - 13164.437: 53.3467% ( 63) 00:09:44.461 13164.437 - 13221.673: 53.9168% ( 54) 00:09:44.461 13221.673 - 13278.910: 54.5080% ( 56) 00:09:44.461 13278.910 - 13336.147: 54.9514% ( 42) 00:09:44.461 13336.147 - 13393.383: 55.3315% ( 36) 00:09:44.461 13393.383 - 13450.620: 55.5638% ( 22) 00:09:44.461 13450.620 - 13507.857: 55.8277% ( 25) 00:09:44.461 13507.857 - 13565.093: 56.0811% ( 24) 00:09:44.461 13565.093 - 13622.330: 56.4611% ( 36) 00:09:44.461 13622.330 - 13679.567: 56.8834% ( 40) 00:09:44.461 13679.567 - 13736.803: 57.2741% ( 37) 00:09:44.461 13736.803 - 13794.040: 57.6541% ( 36) 00:09:44.461 13794.040 - 13851.277: 58.0448% ( 37) 00:09:44.461 13851.277 - 13908.514: 58.5304% ( 46) 00:09:44.461 13908.514 - 13965.750: 59.1111% ( 55) 00:09:44.461 13965.750 - 14022.987: 59.7762% ( 63) 00:09:44.461 14022.987 - 14080.224: 60.3568% ( 55) 00:09:44.461 14080.224 - 14137.460: 61.0536% ( 66) 00:09:44.461 14137.460 - 14194.697: 61.8243% ( 73) 00:09:44.461 14194.697 - 14251.934: 62.3733% ( 52) 00:09:44.461 14251.934 - 14309.170: 62.9223% ( 52) 00:09:44.461 14309.170 - 14366.407: 63.5241% ( 57) 00:09:44.461 14366.407 - 14423.644: 64.0519% ( 50) 00:09:44.461 14423.644 - 14480.880: 64.6643% ( 58) 00:09:44.461 14480.880 - 14538.117: 65.4139% ( 71) 00:09:44.461 14538.117 - 14595.354: 66.2479% ( 79) 00:09:44.461 14595.354 - 14652.590: 67.0291% ( 74) 00:09:44.461 14652.590 - 14767.064: 68.3171% ( 122) 00:09:44.461 14767.064 - 14881.537: 69.7530% ( 136) 00:09:44.461 14881.537 - 14996.010: 70.5448% ( 75) 00:09:44.461 14996.010 - 15110.484: 71.5055% ( 91) 00:09:44.461 15110.484 - 15224.957: 72.6985% ( 113) 00:09:44.461 15224.957 - 15339.431: 73.8809% ( 112) 00:09:44.461 15339.431 - 15453.904: 75.2323% ( 128) 00:09:44.461 15453.904 - 15568.377: 76.5625% ( 126) 00:09:44.461 15568.377 - 15682.851: 77.7872% ( 116) 00:09:44.461 15682.851 - 15797.324: 79.1385% ( 128) 00:09:44.461 15797.324 - 15911.797: 80.3843% ( 118) 00:09:44.461 15911.797 - 16026.271: 81.4717% ( 103) 00:09:44.461 16026.271 - 16140.744: 82.6753% ( 114) 00:09:44.461 16140.744 - 16255.217: 83.8366% ( 110) 00:09:44.461 16255.217 - 16369.691: 84.8923% ( 100) 00:09:44.461 16369.691 - 16484.164: 85.8847% ( 94) 00:09:44.461 16484.164 - 16598.638: 86.9827% ( 104) 00:09:44.461 16598.638 - 16713.111: 87.8906% ( 86) 00:09:44.461 16713.111 - 16827.584: 88.6719% ( 74) 00:09:44.461 16827.584 - 16942.058: 89.5481% ( 83) 00:09:44.461 16942.058 - 17056.531: 90.3716% ( 78) 00:09:44.461 17056.531 - 17171.004: 91.0684% ( 66) 00:09:44.461 17171.004 - 17285.478: 91.5541% ( 46) 00:09:44.461 17285.478 - 17399.951: 92.0819% ( 50) 00:09:44.461 17399.951 - 17514.424: 92.6520% ( 54) 00:09:44.461 
17514.424 - 17628.898: 93.1588% ( 48) 00:09:44.461 17628.898 - 17743.371: 93.8978% ( 70) 00:09:44.461 17743.371 - 17857.845: 94.5524% ( 62) 00:09:44.461 17857.845 - 17972.318: 95.3019% ( 71) 00:09:44.461 17972.318 - 18086.791: 95.8932% ( 56) 00:09:44.461 18086.791 - 18201.265: 96.5266% ( 60) 00:09:44.461 18201.265 - 18315.738: 96.8539% ( 31) 00:09:44.461 18315.738 - 18430.211: 97.0334% ( 17) 00:09:44.461 18430.211 - 18544.685: 97.1601% ( 12) 00:09:44.461 18544.685 - 18659.158: 97.2551% ( 9) 00:09:44.461 18659.158 - 18773.631: 97.4029% ( 14) 00:09:44.461 18773.631 - 18888.105: 97.4979% ( 9) 00:09:44.461 18888.105 - 19002.578: 97.6457% ( 14) 00:09:44.461 19002.578 - 19117.052: 97.8780% ( 22) 00:09:44.461 19117.052 - 19231.525: 98.1208% ( 23) 00:09:44.461 19231.525 - 19345.998: 98.2791% ( 15) 00:09:44.461 19345.998 - 19460.472: 98.3530% ( 7) 00:09:44.461 19460.472 - 19574.945: 98.5114% ( 15) 00:09:44.461 19574.945 - 19689.418: 98.5959% ( 8) 00:09:44.461 19689.418 - 19803.892: 98.6486% ( 5) 00:09:44.461 25527.560 - 25642.033: 98.6909% ( 4) 00:09:44.461 25642.033 - 25756.507: 98.7648% ( 7) 00:09:44.461 25756.507 - 25870.980: 98.8281% ( 6) 00:09:44.461 25870.980 - 25985.453: 98.9020% ( 7) 00:09:44.461 25985.453 - 26099.927: 98.9759% ( 7) 00:09:44.461 26099.927 - 26214.400: 99.0498% ( 7) 00:09:44.461 26214.400 - 26328.873: 99.1237% ( 7) 00:09:44.461 26328.873 - 26443.347: 99.1976% ( 7) 00:09:44.461 26443.347 - 26557.820: 99.2610% ( 6) 00:09:44.461 26557.820 - 26672.293: 99.3243% ( 6) 00:09:44.461 36173.583 - 36402.529: 99.3454% ( 2) 00:09:44.461 36402.529 - 36631.476: 99.3982% ( 5) 00:09:44.461 36631.476 - 36860.423: 99.4721% ( 7) 00:09:44.461 36860.423 - 37089.369: 99.5460% ( 7) 00:09:44.461 37089.369 - 37318.316: 99.6094% ( 6) 00:09:44.461 37318.316 - 37547.263: 99.6833% ( 7) 00:09:44.461 37547.263 - 37776.210: 99.7572% ( 7) 00:09:44.461 37776.210 - 38005.156: 99.8205% ( 6) 00:09:44.461 38005.156 - 38234.103: 99.8944% ( 7) 00:09:44.461 38234.103 - 38463.050: 99.9683% ( 7) 00:09:44.461 38463.050 - 38691.997: 100.0000% ( 3) 00:09:44.461 00:09:44.461 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:44.461 ============================================================================== 00:09:44.461 Range in us Cumulative IO count 00:09:44.461 6839.783 - 6868.402: 0.0106% ( 1) 00:09:44.461 7011.493 - 7040.112: 0.0211% ( 1) 00:09:44.461 7068.730 - 7097.348: 0.0845% ( 6) 00:09:44.461 7097.348 - 7125.967: 0.1372% ( 5) 00:09:44.461 7125.967 - 7154.585: 0.2323% ( 9) 00:09:44.461 7154.585 - 7183.203: 0.3378% ( 10) 00:09:44.461 7183.203 - 7211.822: 0.4223% ( 8) 00:09:44.461 7211.822 - 7240.440: 0.4540% ( 3) 00:09:44.461 7240.440 - 7269.059: 0.4645% ( 1) 00:09:44.461 7269.059 - 7297.677: 0.4751% ( 1) 00:09:44.461 7297.677 - 7326.295: 0.4962% ( 2) 00:09:44.461 7326.295 - 7383.532: 0.5279% ( 3) 00:09:44.461 7383.532 - 7440.769: 0.5595% ( 3) 00:09:44.461 7440.769 - 7498.005: 0.5912% ( 3) 00:09:44.461 7498.005 - 7555.242: 0.6229% ( 3) 00:09:44.461 7555.242 - 7612.479: 0.6546% ( 3) 00:09:44.461 7612.479 - 7669.715: 0.6757% ( 2) 00:09:44.461 9730.236 - 9787.472: 0.6862% ( 1) 00:09:44.461 9787.472 - 9844.709: 0.7390% ( 5) 00:09:44.461 9844.709 - 9901.946: 0.8446% ( 10) 00:09:44.461 9901.946 - 9959.183: 0.9291% ( 8) 00:09:44.461 9959.183 - 10016.419: 1.2141% ( 27) 00:09:44.461 10016.419 - 10073.656: 1.5414% ( 31) 00:09:44.461 10073.656 - 10130.893: 1.9848% ( 42) 00:09:44.461 10130.893 - 10188.129: 2.4388% ( 43) 00:09:44.461 10188.129 - 10245.366: 3.1778% ( 70) 00:09:44.461 10245.366 - 10302.603: 
3.7057% ( 50) 00:09:44.461 10302.603 - 10359.839: 4.3074% ( 57) 00:09:44.461 10359.839 - 10417.076: 5.2154% ( 86) 00:09:44.461 10417.076 - 10474.313: 5.9227% ( 67) 00:09:44.461 10474.313 - 10531.549: 6.6512% ( 69) 00:09:44.461 10531.549 - 10588.786: 7.4641% ( 77) 00:09:44.461 10588.786 - 10646.023: 8.5621% ( 104) 00:09:44.461 10646.023 - 10703.259: 9.9451% ( 131) 00:09:44.461 10703.259 - 10760.496: 11.6448% ( 161) 00:09:44.461 10760.496 - 10817.733: 13.4079% ( 167) 00:09:44.461 10817.733 - 10874.969: 15.5194% ( 200) 00:09:44.461 10874.969 - 10932.206: 17.8632% ( 222) 00:09:44.461 10932.206 - 10989.443: 19.6896% ( 173) 00:09:44.461 10989.443 - 11046.679: 21.5583% ( 177) 00:09:44.461 11046.679 - 11103.916: 23.3108% ( 166) 00:09:44.461 11103.916 - 11161.153: 24.9894% ( 159) 00:09:44.461 11161.153 - 11218.390: 26.9320% ( 184) 00:09:44.461 11218.390 - 11275.626: 28.5473% ( 153) 00:09:44.461 11275.626 - 11332.863: 29.9514% ( 133) 00:09:44.461 11332.863 - 11390.100: 31.1444% ( 113) 00:09:44.461 11390.100 - 11447.336: 32.4958% ( 128) 00:09:44.461 11447.336 - 11504.573: 33.7943% ( 123) 00:09:44.462 11504.573 - 11561.810: 35.0718% ( 121) 00:09:44.462 11561.810 - 11619.046: 36.1592% ( 103) 00:09:44.462 11619.046 - 11676.283: 37.1410% ( 93) 00:09:44.462 11676.283 - 11733.520: 38.4924% ( 128) 00:09:44.462 11733.520 - 11790.756: 39.3053% ( 77) 00:09:44.462 11790.756 - 11847.993: 40.0338% ( 69) 00:09:44.462 11847.993 - 11905.230: 40.6989% ( 63) 00:09:44.462 11905.230 - 11962.466: 41.1951% ( 47) 00:09:44.462 11962.466 - 12019.703: 41.6491% ( 43) 00:09:44.462 12019.703 - 12076.940: 42.0186% ( 35) 00:09:44.462 12076.940 - 12134.176: 42.3142% ( 28) 00:09:44.462 12134.176 - 12191.413: 42.6415% ( 31) 00:09:44.462 12191.413 - 12248.650: 42.9160% ( 26) 00:09:44.462 12248.650 - 12305.886: 43.3171% ( 38) 00:09:44.462 12305.886 - 12363.123: 43.6655% ( 33) 00:09:44.462 12363.123 - 12420.360: 44.0773% ( 39) 00:09:44.462 12420.360 - 12477.597: 44.5735% ( 47) 00:09:44.462 12477.597 - 12534.833: 45.1225% ( 52) 00:09:44.462 12534.833 - 12592.070: 46.0726% ( 90) 00:09:44.462 12592.070 - 12649.307: 46.7272% ( 62) 00:09:44.462 12649.307 - 12706.543: 47.6774% ( 90) 00:09:44.462 12706.543 - 12763.780: 48.7120% ( 98) 00:09:44.462 12763.780 - 12821.017: 49.6516% ( 89) 00:09:44.462 12821.017 - 12878.253: 50.4962% ( 80) 00:09:44.462 12878.253 - 12935.490: 51.0663% ( 54) 00:09:44.462 12935.490 - 12992.727: 51.6047% ( 51) 00:09:44.462 12992.727 - 13049.963: 52.2804% ( 64) 00:09:44.462 13049.963 - 13107.200: 52.7344% ( 43) 00:09:44.462 13107.200 - 13164.437: 53.1989% ( 44) 00:09:44.462 13164.437 - 13221.673: 53.6106% ( 39) 00:09:44.462 13221.673 - 13278.910: 54.0013% ( 37) 00:09:44.462 13278.910 - 13336.147: 54.4658% ( 44) 00:09:44.462 13336.147 - 13393.383: 54.8353% ( 35) 00:09:44.462 13393.383 - 13450.620: 55.2048% ( 35) 00:09:44.462 13450.620 - 13507.857: 55.6166% ( 39) 00:09:44.462 13507.857 - 13565.093: 56.0705% ( 43) 00:09:44.462 13565.093 - 13622.330: 56.6406% ( 54) 00:09:44.462 13622.330 - 13679.567: 57.2952% ( 62) 00:09:44.462 13679.567 - 13736.803: 57.9286% ( 60) 00:09:44.462 13736.803 - 13794.040: 58.5410% ( 58) 00:09:44.462 13794.040 - 13851.277: 59.2694% ( 69) 00:09:44.462 13851.277 - 13908.514: 59.7867% ( 49) 00:09:44.462 13908.514 - 13965.750: 60.3041% ( 49) 00:09:44.462 13965.750 - 14022.987: 60.8003% ( 47) 00:09:44.462 14022.987 - 14080.224: 61.3598% ( 53) 00:09:44.462 14080.224 - 14137.460: 61.8032% ( 42) 00:09:44.462 14137.460 - 14194.697: 62.3522% ( 52) 00:09:44.462 14194.697 - 14251.934: 62.8484% ( 47) 
00:09:44.462 14251.934 - 14309.170: 63.3868% ( 51) 00:09:44.462 14309.170 - 14366.407: 63.8830% ( 47) 00:09:44.462 14366.407 - 14423.644: 64.5481% ( 63) 00:09:44.462 14423.644 - 14480.880: 65.3505% ( 76) 00:09:44.462 14480.880 - 14538.117: 66.2479% ( 85) 00:09:44.462 14538.117 - 14595.354: 66.9236% ( 64) 00:09:44.462 14595.354 - 14652.590: 67.4620% ( 51) 00:09:44.462 14652.590 - 14767.064: 68.5600% ( 104) 00:09:44.462 14767.064 - 14881.537: 69.3940% ( 79) 00:09:44.462 14881.537 - 14996.010: 70.5342% ( 108) 00:09:44.462 14996.010 - 15110.484: 71.7483% ( 115) 00:09:44.462 15110.484 - 15224.957: 72.8674% ( 106) 00:09:44.462 15224.957 - 15339.431: 73.8492% ( 93) 00:09:44.462 15339.431 - 15453.904: 75.0845% ( 117) 00:09:44.462 15453.904 - 15568.377: 76.3725% ( 122) 00:09:44.462 15568.377 - 15682.851: 77.6816% ( 124) 00:09:44.462 15682.851 - 15797.324: 79.3180% ( 155) 00:09:44.462 15797.324 - 15911.797: 80.5321% ( 115) 00:09:44.462 15911.797 - 16026.271: 81.5456% ( 96) 00:09:44.462 16026.271 - 16140.744: 82.5169% ( 92) 00:09:44.462 16140.744 - 16255.217: 83.7204% ( 114) 00:09:44.462 16255.217 - 16369.691: 84.9873% ( 120) 00:09:44.462 16369.691 - 16484.164: 86.2965% ( 124) 00:09:44.462 16484.164 - 16598.638: 87.1622% ( 82) 00:09:44.462 16598.638 - 16713.111: 87.9012% ( 70) 00:09:44.462 16713.111 - 16827.584: 88.8936% ( 94) 00:09:44.462 16827.584 - 16942.058: 90.0866% ( 113) 00:09:44.462 16942.058 - 17056.531: 91.0051% ( 87) 00:09:44.462 17056.531 - 17171.004: 91.6702% ( 63) 00:09:44.462 17171.004 - 17285.478: 92.1981% ( 50) 00:09:44.462 17285.478 - 17399.951: 92.6837% ( 46) 00:09:44.462 17399.951 - 17514.424: 92.9793% ( 28) 00:09:44.462 17514.424 - 17628.898: 93.3383% ( 34) 00:09:44.462 17628.898 - 17743.371: 93.7183% ( 36) 00:09:44.462 17743.371 - 17857.845: 94.2568% ( 51) 00:09:44.462 17857.845 - 17972.318: 95.0697% ( 77) 00:09:44.462 17972.318 - 18086.791: 96.1677% ( 104) 00:09:44.462 18086.791 - 18201.265: 96.8856% ( 68) 00:09:44.462 18201.265 - 18315.738: 97.2867% ( 38) 00:09:44.462 18315.738 - 18430.211: 97.5823% ( 28) 00:09:44.462 18430.211 - 18544.685: 97.7829% ( 19) 00:09:44.462 18544.685 - 18659.158: 97.8674% ( 8) 00:09:44.462 18659.158 - 18773.631: 97.9624% ( 9) 00:09:44.462 18773.631 - 18888.105: 98.0574% ( 9) 00:09:44.462 18888.105 - 19002.578: 98.1313% ( 7) 00:09:44.462 19002.578 - 19117.052: 98.1841% ( 5) 00:09:44.462 19117.052 - 19231.525: 98.2369% ( 5) 00:09:44.462 19231.525 - 19345.998: 98.3742% ( 13) 00:09:44.462 19345.998 - 19460.472: 98.5536% ( 17) 00:09:44.462 19460.472 - 19574.945: 98.6486% ( 9) 00:09:44.462 26443.347 - 26557.820: 98.7014% ( 5) 00:09:44.462 26557.820 - 26672.293: 98.7753% ( 7) 00:09:44.462 26672.293 - 26786.767: 98.8492% ( 7) 00:09:44.462 26786.767 - 26901.240: 98.9231% ( 7) 00:09:44.462 26901.240 - 27015.714: 98.9970% ( 7) 00:09:44.462 27015.714 - 27130.187: 99.0709% ( 7) 00:09:44.462 27130.187 - 27244.660: 99.1343% ( 6) 00:09:44.462 27244.660 - 27359.134: 99.2082% ( 7) 00:09:44.462 27359.134 - 27473.607: 99.2821% ( 7) 00:09:44.462 27473.607 - 27588.080: 99.3243% ( 4) 00:09:44.462 37318.316 - 37547.263: 99.3771% ( 5) 00:09:44.462 37547.263 - 37776.210: 99.4510% ( 7) 00:09:44.462 37776.210 - 38005.156: 99.5249% ( 7) 00:09:44.462 38005.156 - 38234.103: 99.5988% ( 7) 00:09:44.462 38234.103 - 38463.050: 99.6833% ( 8) 00:09:44.462 38463.050 - 38691.997: 99.7677% ( 8) 00:09:44.462 38691.997 - 38920.943: 99.8522% ( 8) 00:09:44.462 38920.943 - 39149.890: 99.9472% ( 9) 00:09:44.462 39149.890 - 39378.837: 100.0000% ( 5) 00:09:44.462 00:09:44.462 Latency histogram 
for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:44.462 ============================================================================== 00:09:44.462 Range in us Cumulative IO count 00:09:44.462 6524.982 - 6553.600: 0.0211% ( 2) 00:09:44.462 6553.600 - 6582.218: 0.0317% ( 1) 00:09:44.462 6582.218 - 6610.837: 0.0528% ( 2) 00:09:44.462 6610.837 - 6639.455: 0.0739% ( 2) 00:09:44.462 6639.455 - 6668.073: 0.0845% ( 1) 00:09:44.462 6668.073 - 6696.692: 0.1056% ( 2) 00:09:44.462 6696.692 - 6725.310: 0.1372% ( 3) 00:09:44.462 6725.310 - 6753.928: 0.1795% ( 4) 00:09:44.462 6753.928 - 6782.547: 0.2111% ( 3) 00:09:44.462 6782.547 - 6811.165: 0.2639% ( 5) 00:09:44.462 6811.165 - 6839.783: 0.3378% ( 7) 00:09:44.462 6839.783 - 6868.402: 0.4329% ( 9) 00:09:44.462 6868.402 - 6897.020: 0.4751% ( 4) 00:09:44.462 6897.020 - 6925.638: 0.5068% ( 3) 00:09:44.462 6925.638 - 6954.257: 0.5384% ( 3) 00:09:44.462 6954.257 - 6982.875: 0.5807% ( 4) 00:09:44.462 6982.875 - 7011.493: 0.6018% ( 2) 00:09:44.462 7011.493 - 7040.112: 0.6123% ( 1) 00:09:44.462 7040.112 - 7068.730: 0.6334% ( 2) 00:09:44.462 7068.730 - 7097.348: 0.6546% ( 2) 00:09:44.462 7097.348 - 7125.967: 0.6651% ( 1) 00:09:44.462 7125.967 - 7154.585: 0.6757% ( 1) 00:09:44.462 9386.816 - 9444.052: 0.6862% ( 1) 00:09:44.462 9501.289 - 9558.526: 0.7073% ( 2) 00:09:44.462 9558.526 - 9615.762: 0.7496% ( 4) 00:09:44.462 9615.762 - 9672.999: 0.8446% ( 9) 00:09:44.462 9672.999 - 9730.236: 0.9185% ( 7) 00:09:44.462 9730.236 - 9787.472: 1.0769% ( 15) 00:09:44.462 9787.472 - 9844.709: 1.1930% ( 11) 00:09:44.462 9844.709 - 9901.946: 1.3302% ( 13) 00:09:44.462 9901.946 - 9959.183: 1.4358% ( 10) 00:09:44.462 9959.183 - 10016.419: 1.6258% ( 18) 00:09:44.462 10016.419 - 10073.656: 2.1009% ( 45) 00:09:44.462 10073.656 - 10130.893: 2.4599% ( 34) 00:09:44.462 10130.893 - 10188.129: 2.8505% ( 37) 00:09:44.462 10188.129 - 10245.366: 3.4101% ( 53) 00:09:44.462 10245.366 - 10302.603: 3.8007% ( 37) 00:09:44.462 10302.603 - 10359.839: 4.3497% ( 52) 00:09:44.462 10359.839 - 10417.076: 4.9620% ( 58) 00:09:44.462 10417.076 - 10474.313: 5.7749% ( 77) 00:09:44.462 10474.313 - 10531.549: 6.8623% ( 103) 00:09:44.462 10531.549 - 10588.786: 8.0448% ( 112) 00:09:44.462 10588.786 - 10646.023: 9.3433% ( 123) 00:09:44.462 10646.023 - 10703.259: 10.8425% ( 142) 00:09:44.462 10703.259 - 10760.496: 12.5000% ( 157) 00:09:44.462 10760.496 - 10817.733: 14.0731% ( 149) 00:09:44.462 10817.733 - 10874.969: 16.2057% ( 202) 00:09:44.462 10874.969 - 10932.206: 18.3910% ( 207) 00:09:44.462 10932.206 - 10989.443: 20.6187% ( 211) 00:09:44.462 10989.443 - 11046.679: 22.8252% ( 209) 00:09:44.463 11046.679 - 11103.916: 25.1900% ( 224) 00:09:44.463 11103.916 - 11161.153: 27.2276% ( 193) 00:09:44.463 11161.153 - 11218.390: 28.7796% ( 147) 00:09:44.463 11218.390 - 11275.626: 29.9409% ( 110) 00:09:44.463 11275.626 - 11332.863: 31.2078% ( 120) 00:09:44.463 11332.863 - 11390.100: 32.3691% ( 110) 00:09:44.463 11390.100 - 11447.336: 33.4671% ( 104) 00:09:44.463 11447.336 - 11504.573: 34.4278% ( 91) 00:09:44.463 11504.573 - 11561.810: 35.3885% ( 91) 00:09:44.463 11561.810 - 11619.046: 36.1275% ( 70) 00:09:44.463 11619.046 - 11676.283: 36.8560% ( 69) 00:09:44.463 11676.283 - 11733.520: 37.5950% ( 70) 00:09:44.463 11733.520 - 11790.756: 38.2390% ( 61) 00:09:44.463 11790.756 - 11847.993: 38.8830% ( 61) 00:09:44.463 11847.993 - 11905.230: 39.5693% ( 65) 00:09:44.463 11905.230 - 11962.466: 40.3611% ( 75) 00:09:44.463 11962.466 - 12019.703: 40.8995% ( 51) 00:09:44.463 12019.703 - 12076.940: 41.5013% ( 57) 00:09:44.463 
12076.940 - 12134.176: 42.1242% ( 59) 00:09:44.463 12134.176 - 12191.413: 42.8421% ( 68) 00:09:44.463 12191.413 - 12248.650: 43.6233% ( 74) 00:09:44.463 12248.650 - 12305.886: 44.5312% ( 86) 00:09:44.463 12305.886 - 12363.123: 45.4814% ( 90) 00:09:44.463 12363.123 - 12420.360: 46.2521% ( 73) 00:09:44.463 12420.360 - 12477.597: 47.1178% ( 82) 00:09:44.463 12477.597 - 12534.833: 47.9519% ( 79) 00:09:44.463 12534.833 - 12592.070: 48.8176% ( 82) 00:09:44.463 12592.070 - 12649.307: 49.4510% ( 60) 00:09:44.463 12649.307 - 12706.543: 49.9367% ( 46) 00:09:44.463 12706.543 - 12763.780: 50.4645% ( 50) 00:09:44.463 12763.780 - 12821.017: 50.9185% ( 43) 00:09:44.463 12821.017 - 12878.253: 51.2986% ( 36) 00:09:44.463 12878.253 - 12935.490: 51.7103% ( 39) 00:09:44.463 12935.490 - 12992.727: 52.1326% ( 40) 00:09:44.463 12992.727 - 13049.963: 52.4388% ( 29) 00:09:44.463 13049.963 - 13107.200: 52.8294% ( 37) 00:09:44.463 13107.200 - 13164.437: 53.3361% ( 48) 00:09:44.463 13164.437 - 13221.673: 53.7479% ( 39) 00:09:44.463 13221.673 - 13278.910: 54.1174% ( 35) 00:09:44.463 13278.910 - 13336.147: 54.5714% ( 43) 00:09:44.463 13336.147 - 13393.383: 55.0570% ( 46) 00:09:44.463 13393.383 - 13450.620: 55.5004% ( 42) 00:09:44.463 13450.620 - 13507.857: 55.8805% ( 36) 00:09:44.463 13507.857 - 13565.093: 56.2500% ( 35) 00:09:44.463 13565.093 - 13622.330: 56.5878% ( 32) 00:09:44.463 13622.330 - 13679.567: 57.1157% ( 50) 00:09:44.463 13679.567 - 13736.803: 57.6858% ( 54) 00:09:44.463 13736.803 - 13794.040: 58.2770% ( 56) 00:09:44.463 13794.040 - 13851.277: 58.8999% ( 59) 00:09:44.463 13851.277 - 13908.514: 59.8501% ( 90) 00:09:44.463 13908.514 - 13965.750: 60.8742% ( 97) 00:09:44.463 13965.750 - 14022.987: 61.7504% ( 83) 00:09:44.463 14022.987 - 14080.224: 62.5317% ( 74) 00:09:44.463 14080.224 - 14137.460: 63.1334% ( 57) 00:09:44.463 14137.460 - 14194.697: 63.6508% ( 49) 00:09:44.463 14194.697 - 14251.934: 64.1681% ( 49) 00:09:44.463 14251.934 - 14309.170: 64.6326% ( 44) 00:09:44.463 14309.170 - 14366.407: 65.0760% ( 42) 00:09:44.463 14366.407 - 14423.644: 65.5194% ( 42) 00:09:44.463 14423.644 - 14480.880: 65.9523% ( 41) 00:09:44.463 14480.880 - 14538.117: 66.4274% ( 45) 00:09:44.463 14538.117 - 14595.354: 67.0291% ( 57) 00:09:44.463 14595.354 - 14652.590: 67.6520% ( 59) 00:09:44.463 14652.590 - 14767.064: 69.1934% ( 146) 00:09:44.463 14767.064 - 14881.537: 70.4709% ( 121) 00:09:44.463 14881.537 - 14996.010: 71.5266% ( 100) 00:09:44.463 14996.010 - 15110.484: 72.4029% ( 83) 00:09:44.463 15110.484 - 15224.957: 73.5220% ( 106) 00:09:44.463 15224.957 - 15339.431: 74.8205% ( 123) 00:09:44.463 15339.431 - 15453.904: 76.3936% ( 149) 00:09:44.463 15453.904 - 15568.377: 77.1326% ( 70) 00:09:44.463 15568.377 - 15682.851: 78.0300% ( 85) 00:09:44.463 15682.851 - 15797.324: 79.0224% ( 94) 00:09:44.463 15797.324 - 15911.797: 80.3948% ( 130) 00:09:44.463 15911.797 - 16026.271: 81.4506% ( 100) 00:09:44.463 16026.271 - 16140.744: 82.8970% ( 137) 00:09:44.463 16140.744 - 16255.217: 83.6465% ( 71) 00:09:44.463 16255.217 - 16369.691: 84.2272% ( 55) 00:09:44.463 16369.691 - 16484.164: 85.0718% ( 80) 00:09:44.463 16484.164 - 16598.638: 85.8636% ( 75) 00:09:44.463 16598.638 - 16713.111: 86.6132% ( 71) 00:09:44.463 16713.111 - 16827.584: 87.3839% ( 73) 00:09:44.463 16827.584 - 16942.058: 88.3024% ( 87) 00:09:44.463 16942.058 - 17056.531: 89.5376% ( 117) 00:09:44.463 17056.531 - 17171.004: 90.1921% ( 62) 00:09:44.463 17171.004 - 17285.478: 90.7517% ( 53) 00:09:44.463 17285.478 - 17399.951: 91.3112% ( 53) 00:09:44.463 17399.951 - 17514.424: 
92.1664% ( 81) 00:09:44.463 17514.424 - 17628.898: 92.7787% ( 58) 00:09:44.463 17628.898 - 17743.371: 93.2327% ( 43) 00:09:44.463 17743.371 - 17857.845: 93.5389% ( 29) 00:09:44.463 17857.845 - 17972.318: 93.8556% ( 30) 00:09:44.463 17972.318 - 18086.791: 94.2462% ( 37) 00:09:44.463 18086.791 - 18201.265: 94.8585% ( 58) 00:09:44.463 18201.265 - 18315.738: 95.8404% ( 93) 00:09:44.463 18315.738 - 18430.211: 96.6005% ( 72) 00:09:44.463 18430.211 - 18544.685: 97.2340% ( 60) 00:09:44.463 18544.685 - 18659.158: 97.8146% ( 55) 00:09:44.463 18659.158 - 18773.631: 98.3742% ( 53) 00:09:44.463 18773.631 - 18888.105: 98.4797% ( 10) 00:09:44.463 18888.105 - 19002.578: 98.5220% ( 4) 00:09:44.463 19002.578 - 19117.052: 98.5431% ( 2) 00:09:44.463 19117.052 - 19231.525: 98.5747% ( 3) 00:09:44.463 19231.525 - 19345.998: 98.5959% ( 2) 00:09:44.463 19345.998 - 19460.472: 98.6170% ( 2) 00:09:44.463 19460.472 - 19574.945: 98.6381% ( 2) 00:09:44.463 19574.945 - 19689.418: 98.6486% ( 1) 00:09:44.463 26443.347 - 26557.820: 98.6803% ( 3) 00:09:44.463 26557.820 - 26672.293: 98.7014% ( 2) 00:09:44.463 26672.293 - 26786.767: 98.7331% ( 3) 00:09:44.463 26786.767 - 26901.240: 98.7648% ( 3) 00:09:44.463 26901.240 - 27015.714: 98.7965% ( 3) 00:09:44.463 27015.714 - 27130.187: 98.8281% ( 3) 00:09:44.463 27130.187 - 27244.660: 98.8704% ( 4) 00:09:44.463 27244.660 - 27359.134: 98.9020% ( 3) 00:09:44.463 27359.134 - 27473.607: 98.9443% ( 4) 00:09:44.463 27473.607 - 27588.080: 98.9759% ( 3) 00:09:44.463 27588.080 - 27702.554: 99.0182% ( 4) 00:09:44.463 27702.554 - 27817.027: 99.0498% ( 3) 00:09:44.463 27817.027 - 27931.500: 99.0815% ( 3) 00:09:44.463 27931.500 - 28045.974: 99.1132% ( 3) 00:09:44.463 28045.974 - 28160.447: 99.1554% ( 4) 00:09:44.463 28160.447 - 28274.921: 99.1871% ( 3) 00:09:44.463 28274.921 - 28389.394: 99.2293% ( 4) 00:09:44.463 28389.394 - 28503.867: 99.2715% ( 4) 00:09:44.463 28503.867 - 28618.341: 99.3032% ( 3) 00:09:44.463 28618.341 - 28732.814: 99.3243% ( 2) 00:09:44.463 37547.263 - 37776.210: 99.3349% ( 1) 00:09:44.463 37776.210 - 38005.156: 99.4299% ( 9) 00:09:44.463 38005.156 - 38234.103: 99.5249% ( 9) 00:09:44.463 38234.103 - 38463.050: 99.5988% ( 7) 00:09:44.463 38463.050 - 38691.997: 99.6938% ( 9) 00:09:44.463 38691.997 - 38920.943: 99.7783% ( 8) 00:09:44.463 38920.943 - 39149.890: 99.8733% ( 9) 00:09:44.463 39149.890 - 39378.837: 99.9578% ( 8) 00:09:44.463 39378.837 - 39607.783: 100.0000% ( 4) 00:09:44.463 00:09:44.463 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:44.463 ============================================================================== 00:09:44.463 Range in us Cumulative IO count 00:09:44.463 6067.088 - 6095.707: 0.0211% ( 2) 00:09:44.463 6095.707 - 6124.325: 0.0317% ( 1) 00:09:44.463 6124.325 - 6152.943: 0.0739% ( 4) 00:09:44.463 6152.943 - 6181.562: 0.1161% ( 4) 00:09:44.463 6181.562 - 6210.180: 0.1584% ( 4) 00:09:44.463 6210.180 - 6238.798: 0.2006% ( 4) 00:09:44.463 6238.798 - 6267.417: 0.2323% ( 3) 00:09:44.463 6267.417 - 6296.035: 0.3062% ( 7) 00:09:44.463 6296.035 - 6324.653: 0.4329% ( 12) 00:09:44.463 6324.653 - 6353.272: 0.4645% ( 3) 00:09:44.463 6353.272 - 6381.890: 0.4962% ( 3) 00:09:44.463 6381.890 - 6410.508: 0.5279% ( 3) 00:09:44.463 6410.508 - 6439.127: 0.5595% ( 3) 00:09:44.463 6439.127 - 6467.745: 0.5701% ( 1) 00:09:44.463 6467.745 - 6496.363: 0.5912% ( 2) 00:09:44.463 6496.363 - 6524.982: 0.6123% ( 2) 00:09:44.463 6524.982 - 6553.600: 0.6229% ( 1) 00:09:44.463 6553.600 - 6582.218: 0.6440% ( 2) 00:09:44.463 6582.218 - 6610.837: 0.6546% ( 1) 
00:09:44.463 6610.837 - 6639.455: 0.6757% ( 2) 00:09:44.463 9615.762 - 9672.999: 0.6862% ( 1) 00:09:44.463 9672.999 - 9730.236: 0.7073% ( 2) 00:09:44.463 9730.236 - 9787.472: 0.7707% ( 6) 00:09:44.463 9787.472 - 9844.709: 1.0452% ( 26) 00:09:44.463 9844.709 - 9901.946: 1.4041% ( 34) 00:09:44.463 9901.946 - 9959.183: 1.6047% ( 19) 00:09:44.463 9959.183 - 10016.419: 2.0376% ( 41) 00:09:44.463 10016.419 - 10073.656: 2.2487% ( 20) 00:09:44.463 10073.656 - 10130.893: 2.4704% ( 21) 00:09:44.463 10130.893 - 10188.129: 2.6605% ( 18) 00:09:44.463 10188.129 - 10245.366: 2.8611% ( 19) 00:09:44.463 10245.366 - 10302.603: 3.0617% ( 19) 00:09:44.463 10302.603 - 10359.839: 3.3467% ( 27) 00:09:44.463 10359.839 - 10417.076: 3.9168% ( 54) 00:09:44.463 10417.076 - 10474.313: 4.5925% ( 64) 00:09:44.463 10474.313 - 10531.549: 5.6166% ( 97) 00:09:44.463 10531.549 - 10588.786: 6.7462% ( 107) 00:09:44.463 10588.786 - 10646.023: 7.9814% ( 117) 00:09:44.463 10646.023 - 10703.259: 9.9029% ( 182) 00:09:44.464 10703.259 - 10760.496: 12.6689% ( 262) 00:09:44.464 10760.496 - 10817.733: 14.6854% ( 191) 00:09:44.464 10817.733 - 10874.969: 16.7441% ( 195) 00:09:44.464 10874.969 - 10932.206: 18.6233% ( 178) 00:09:44.464 10932.206 - 10989.443: 20.4392% ( 172) 00:09:44.464 10989.443 - 11046.679: 22.2234% ( 169) 00:09:44.464 11046.679 - 11103.916: 23.8809% ( 157) 00:09:44.464 11103.916 - 11161.153: 25.3695% ( 141) 00:09:44.464 11161.153 - 11218.390: 26.7631% ( 132) 00:09:44.464 11218.390 - 11275.626: 28.0617% ( 123) 00:09:44.464 11275.626 - 11332.863: 29.6030% ( 146) 00:09:44.464 11332.863 - 11390.100: 31.1128% ( 143) 00:09:44.464 11390.100 - 11447.336: 32.4535% ( 127) 00:09:44.464 11447.336 - 11504.573: 33.5198% ( 101) 00:09:44.464 11504.573 - 11561.810: 34.6812% ( 110) 00:09:44.464 11561.810 - 11619.046: 35.9481% ( 120) 00:09:44.464 11619.046 - 11676.283: 36.8243% ( 83) 00:09:44.464 11676.283 - 11733.520: 37.6689% ( 80) 00:09:44.464 11733.520 - 11790.756: 38.3657% ( 66) 00:09:44.464 11790.756 - 11847.993: 39.1892% ( 78) 00:09:44.464 11847.993 - 11905.230: 39.9177% ( 69) 00:09:44.464 11905.230 - 11962.466: 40.6778% ( 72) 00:09:44.464 11962.466 - 12019.703: 41.4907% ( 77) 00:09:44.464 12019.703 - 12076.940: 42.5359% ( 99) 00:09:44.464 12076.940 - 12134.176: 43.2644% ( 69) 00:09:44.464 12134.176 - 12191.413: 43.9295% ( 63) 00:09:44.464 12191.413 - 12248.650: 44.6368% ( 67) 00:09:44.464 12248.650 - 12305.886: 45.1647% ( 50) 00:09:44.464 12305.886 - 12363.123: 45.7770% ( 58) 00:09:44.464 12363.123 - 12420.360: 46.6216% ( 80) 00:09:44.464 12420.360 - 12477.597: 47.3818% ( 72) 00:09:44.464 12477.597 - 12534.833: 48.1736% ( 75) 00:09:44.464 12534.833 - 12592.070: 48.7542% ( 55) 00:09:44.464 12592.070 - 12649.307: 49.1976% ( 42) 00:09:44.464 12649.307 - 12706.543: 49.5671% ( 35) 00:09:44.464 12706.543 - 12763.780: 49.8944% ( 31) 00:09:44.464 12763.780 - 12821.017: 50.2217% ( 31) 00:09:44.464 12821.017 - 12878.253: 50.5490% ( 31) 00:09:44.464 12878.253 - 12935.490: 50.8552% ( 29) 00:09:44.464 12935.490 - 12992.727: 51.1508% ( 28) 00:09:44.464 12992.727 - 13049.963: 51.4253% ( 26) 00:09:44.464 13049.963 - 13107.200: 51.6364% ( 20) 00:09:44.464 13107.200 - 13164.437: 52.2382% ( 57) 00:09:44.464 13164.437 - 13221.673: 52.6394% ( 38) 00:09:44.464 13221.673 - 13278.910: 53.1461% ( 48) 00:09:44.464 13278.910 - 13336.147: 53.7584% ( 58) 00:09:44.464 13336.147 - 13393.383: 54.1913% ( 41) 00:09:44.464 13393.383 - 13450.620: 54.6769% ( 46) 00:09:44.464 13450.620 - 13507.857: 55.3737% ( 66) 00:09:44.464 13507.857 - 13565.093: 56.0705% ( 66) 
00:09:44.464 13565.093 - 13622.330: 56.9362% ( 82) 00:09:44.464 13622.330 - 13679.567: 57.8758% ( 89) 00:09:44.464 13679.567 - 13736.803: 58.7627% ( 84) 00:09:44.464 13736.803 - 13794.040: 59.4383% ( 64) 00:09:44.464 13794.040 - 13851.277: 60.3146% ( 83) 00:09:44.464 13851.277 - 13908.514: 61.0536% ( 70) 00:09:44.464 13908.514 - 13965.750: 61.7504% ( 66) 00:09:44.464 13965.750 - 14022.987: 62.5422% ( 75) 00:09:44.464 14022.987 - 14080.224: 63.3657% ( 78) 00:09:44.464 14080.224 - 14137.460: 63.9569% ( 56) 00:09:44.464 14137.460 - 14194.697: 64.5059% ( 52) 00:09:44.464 14194.697 - 14251.934: 65.0443% ( 51) 00:09:44.464 14251.934 - 14309.170: 65.4772% ( 41) 00:09:44.464 14309.170 - 14366.407: 65.9206% ( 42) 00:09:44.464 14366.407 - 14423.644: 66.4062% ( 46) 00:09:44.464 14423.644 - 14480.880: 66.8708% ( 44) 00:09:44.464 14480.880 - 14538.117: 67.5992% ( 69) 00:09:44.464 14538.117 - 14595.354: 68.0849% ( 46) 00:09:44.464 14595.354 - 14652.590: 68.4861% ( 38) 00:09:44.464 14652.590 - 14767.064: 69.4151% ( 88) 00:09:44.464 14767.064 - 14881.537: 70.4920% ( 102) 00:09:44.464 14881.537 - 14996.010: 71.2521% ( 72) 00:09:44.464 14996.010 - 15110.484: 72.0122% ( 72) 00:09:44.464 15110.484 - 15224.957: 72.9519% ( 89) 00:09:44.464 15224.957 - 15339.431: 74.0921% ( 108) 00:09:44.464 15339.431 - 15453.904: 74.8522% ( 72) 00:09:44.464 15453.904 - 15568.377: 75.6968% ( 80) 00:09:44.464 15568.377 - 15682.851: 76.7314% ( 98) 00:09:44.464 15682.851 - 15797.324: 77.7872% ( 100) 00:09:44.464 15797.324 - 15911.797: 79.7508% ( 186) 00:09:44.464 15911.797 - 16026.271: 81.2500% ( 142) 00:09:44.464 16026.271 - 16140.744: 82.3480% ( 104) 00:09:44.464 16140.744 - 16255.217: 83.1398% ( 75) 00:09:44.464 16255.217 - 16369.691: 83.9527% ( 77) 00:09:44.464 16369.691 - 16484.164: 84.6812% ( 69) 00:09:44.464 16484.164 - 16598.638: 85.6419% ( 91) 00:09:44.464 16598.638 - 16713.111: 86.6871% ( 99) 00:09:44.464 16713.111 - 16827.584: 87.6161% ( 88) 00:09:44.464 16827.584 - 16942.058: 88.9569% ( 127) 00:09:44.464 16942.058 - 17056.531: 89.9493% ( 94) 00:09:44.464 17056.531 - 17171.004: 90.8256% ( 83) 00:09:44.464 17171.004 - 17285.478: 91.5857% ( 72) 00:09:44.464 17285.478 - 17399.951: 92.4514% ( 82) 00:09:44.464 17399.951 - 17514.424: 93.3805% ( 88) 00:09:44.464 17514.424 - 17628.898: 94.2779% ( 85) 00:09:44.464 17628.898 - 17743.371: 95.0486% ( 73) 00:09:44.464 17743.371 - 17857.845: 95.6609% ( 58) 00:09:44.464 17857.845 - 17972.318: 96.1571% ( 47) 00:09:44.464 17972.318 - 18086.791: 96.7061% ( 52) 00:09:44.464 18086.791 - 18201.265: 97.1178% ( 39) 00:09:44.464 18201.265 - 18315.738: 97.4451% ( 31) 00:09:44.464 18315.738 - 18430.211: 97.9202% ( 45) 00:09:44.464 18430.211 - 18544.685: 98.3003% ( 36) 00:09:44.464 18544.685 - 18659.158: 98.5114% ( 20) 00:09:44.464 18659.158 - 18773.631: 98.6064% ( 9) 00:09:44.464 18773.631 - 18888.105: 98.6486% ( 4) 00:09:44.464 27015.714 - 27130.187: 98.6909% ( 4) 00:09:44.464 27130.187 - 27244.660: 98.7226% ( 3) 00:09:44.464 27244.660 - 27359.134: 98.7648% ( 4) 00:09:44.464 27359.134 - 27473.607: 98.7965% ( 3) 00:09:44.464 27473.607 - 27588.080: 98.8387% ( 4) 00:09:44.464 27588.080 - 27702.554: 98.8598% ( 2) 00:09:44.464 27702.554 - 27817.027: 98.9020% ( 4) 00:09:44.464 27817.027 - 27931.500: 98.9337% ( 3) 00:09:44.464 27931.500 - 28045.974: 98.9759% ( 4) 00:09:44.464 28045.974 - 28160.447: 99.0076% ( 3) 00:09:44.464 28160.447 - 28274.921: 99.0393% ( 3) 00:09:44.464 28274.921 - 28389.394: 99.0815% ( 4) 00:09:44.464 28389.394 - 28503.867: 99.1132% ( 3) 00:09:44.464 28503.867 - 28618.341: 
99.1448% ( 3) 00:09:44.464 28618.341 - 28732.814: 99.1871% ( 4) 00:09:44.464 28732.814 - 28847.287: 99.2293% ( 4) 00:09:44.464 28847.287 - 28961.761: 99.2715% ( 4) 00:09:44.464 28961.761 - 29076.234: 99.2927% ( 2) 00:09:44.464 29076.234 - 29190.707: 99.3243% ( 3) 00:09:44.464 37776.210 - 38005.156: 99.4088% ( 8) 00:09:44.464 38005.156 - 38234.103: 99.4932% ( 8) 00:09:44.464 38234.103 - 38463.050: 99.5671% ( 7) 00:09:44.464 38463.050 - 38691.997: 99.6622% ( 9) 00:09:44.464 38691.997 - 38920.943: 99.7466% ( 8) 00:09:44.464 38920.943 - 39149.890: 99.8311% ( 8) 00:09:44.464 39149.890 - 39378.837: 99.9261% ( 9) 00:09:44.464 39378.837 - 39607.783: 100.0000% ( 7) 00:09:44.464 00:09:44.464 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:44.464 ============================================================================== 00:09:44.464 Range in us Cumulative IO count 00:09:44.464 5923.997 - 5952.615: 0.0210% ( 2) 00:09:44.464 5952.615 - 5981.233: 0.1154% ( 9) 00:09:44.464 5981.233 - 6009.852: 0.2202% ( 10) 00:09:44.464 6009.852 - 6038.470: 0.3461% ( 12) 00:09:44.464 6038.470 - 6067.088: 0.4509% ( 10) 00:09:44.465 6067.088 - 6095.707: 0.5243% ( 7) 00:09:44.465 6095.707 - 6124.325: 0.5348% ( 1) 00:09:44.465 6124.325 - 6152.943: 0.5453% ( 1) 00:09:44.465 6152.943 - 6181.562: 0.5663% ( 2) 00:09:44.465 6181.562 - 6210.180: 0.5768% ( 1) 00:09:44.465 6210.180 - 6238.798: 0.5977% ( 2) 00:09:44.465 6238.798 - 6267.417: 0.6187% ( 2) 00:09:44.465 6267.417 - 6296.035: 0.6292% ( 1) 00:09:44.465 6296.035 - 6324.653: 0.6397% ( 1) 00:09:44.465 6324.653 - 6353.272: 0.6607% ( 2) 00:09:44.465 6353.272 - 6381.890: 0.6711% ( 1) 00:09:44.465 9329.579 - 9386.816: 0.6921% ( 2) 00:09:44.465 9386.816 - 9444.052: 0.7236% ( 3) 00:09:44.465 9444.052 - 9501.289: 0.8284% ( 10) 00:09:44.465 9501.289 - 9558.526: 0.9228% ( 9) 00:09:44.465 9558.526 - 9615.762: 1.0906% ( 16) 00:09:44.465 9615.762 - 9672.999: 1.1430% ( 5) 00:09:44.465 9672.999 - 9730.236: 1.1955% ( 5) 00:09:44.465 9730.236 - 9787.472: 1.2269% ( 3) 00:09:44.465 9787.472 - 9844.709: 1.2794% ( 5) 00:09:44.465 9844.709 - 9901.946: 1.3003% ( 2) 00:09:44.465 9901.946 - 9959.183: 1.3947% ( 9) 00:09:44.465 9959.183 - 10016.419: 1.5520% ( 15) 00:09:44.465 10016.419 - 10073.656: 1.7408% ( 18) 00:09:44.465 10073.656 - 10130.893: 1.9190% ( 17) 00:09:44.465 10130.893 - 10188.129: 2.3490% ( 41) 00:09:44.465 10188.129 - 10245.366: 2.6846% ( 32) 00:09:44.465 10245.366 - 10302.603: 3.0621% ( 36) 00:09:44.465 10302.603 - 10359.839: 3.5654% ( 48) 00:09:44.465 10359.839 - 10417.076: 4.3414% ( 74) 00:09:44.465 10417.076 - 10474.313: 5.5789% ( 118) 00:09:44.465 10474.313 - 10531.549: 6.5856% ( 96) 00:09:44.465 10531.549 - 10588.786: 7.8230% ( 118) 00:09:44.465 10588.786 - 10646.023: 9.1023% ( 122) 00:09:44.465 10646.023 - 10703.259: 10.5495% ( 138) 00:09:44.465 10703.259 - 10760.496: 12.4790% ( 184) 00:09:44.465 10760.496 - 10817.733: 14.5029% ( 193) 00:09:44.465 10817.733 - 10874.969: 16.7471% ( 214) 00:09:44.465 10874.969 - 10932.206: 18.6766% ( 184) 00:09:44.465 10932.206 - 10989.443: 20.7424% ( 197) 00:09:44.465 10989.443 - 11046.679: 22.6091% ( 178) 00:09:44.465 11046.679 - 11103.916: 24.3498% ( 166) 00:09:44.465 11103.916 - 11161.153: 25.9753% ( 155) 00:09:44.465 11161.153 - 11218.390: 27.3595% ( 132) 00:09:44.465 11218.390 - 11275.626: 28.7018% ( 128) 00:09:44.465 11275.626 - 11332.863: 30.0860% ( 132) 00:09:44.465 11332.863 - 11390.100: 31.2710% ( 113) 00:09:44.465 11390.100 - 11447.336: 32.2253% ( 91) 00:09:44.465 11447.336 - 11504.573: 32.9174% ( 66) 
00:09:44.465 11504.573 - 11561.810: 34.0185% ( 105) 00:09:44.465 11561.810 - 11619.046: 34.8679% ( 81) 00:09:44.465 11619.046 - 11676.283: 35.5600% ( 66) 00:09:44.465 11676.283 - 11733.520: 36.4513% ( 85) 00:09:44.465 11733.520 - 11790.756: 37.4895% ( 99) 00:09:44.465 11790.756 - 11847.993: 38.6430% ( 110) 00:09:44.465 11847.993 - 11905.230: 39.5449% ( 86) 00:09:44.465 11905.230 - 11962.466: 40.6879% ( 109) 00:09:44.465 11962.466 - 12019.703: 41.8414% ( 110) 00:09:44.465 12019.703 - 12076.940: 42.6594% ( 78) 00:09:44.465 12076.940 - 12134.176: 43.4144% ( 72) 00:09:44.465 12134.176 - 12191.413: 44.2638% ( 81) 00:09:44.465 12191.413 - 12248.650: 44.9979% ( 70) 00:09:44.465 12248.650 - 12305.886: 45.7424% ( 71) 00:09:44.465 12305.886 - 12363.123: 46.4031% ( 63) 00:09:44.465 12363.123 - 12420.360: 46.9694% ( 54) 00:09:44.465 12420.360 - 12477.597: 47.3364% ( 35) 00:09:44.465 12477.597 - 12534.833: 47.7454% ( 39) 00:09:44.465 12534.833 - 12592.070: 48.0914% ( 33) 00:09:44.465 12592.070 - 12649.307: 48.4060% ( 30) 00:09:44.465 12649.307 - 12706.543: 48.6367% ( 22) 00:09:44.465 12706.543 - 12763.780: 48.8989% ( 25) 00:09:44.465 12763.780 - 12821.017: 49.1925% ( 28) 00:09:44.465 12821.017 - 12878.253: 49.4547% ( 25) 00:09:44.465 12878.253 - 12935.490: 49.7483% ( 28) 00:09:44.465 12935.490 - 12992.727: 50.1888% ( 42) 00:09:44.465 12992.727 - 13049.963: 50.5768% ( 37) 00:09:44.465 13049.963 - 13107.200: 50.9123% ( 32) 00:09:44.465 13107.200 - 13164.437: 51.2794% ( 35) 00:09:44.465 13164.437 - 13221.673: 51.8561% ( 55) 00:09:44.465 13221.673 - 13278.910: 52.2861% ( 41) 00:09:44.465 13278.910 - 13336.147: 52.9677% ( 65) 00:09:44.465 13336.147 - 13393.383: 53.6493% ( 65) 00:09:44.465 13393.383 - 13450.620: 54.2890% ( 61) 00:09:44.465 13450.620 - 13507.857: 54.8029% ( 49) 00:09:44.465 13507.857 - 13565.093: 55.1909% ( 37) 00:09:44.465 13565.093 - 13622.330: 55.9249% ( 70) 00:09:44.465 13622.330 - 13679.567: 56.6695% ( 71) 00:09:44.465 13679.567 - 13736.803: 57.3301% ( 63) 00:09:44.465 13736.803 - 13794.040: 57.8544% ( 50) 00:09:44.465 13794.040 - 13851.277: 58.5361% ( 65) 00:09:44.465 13851.277 - 13908.514: 59.1548% ( 59) 00:09:44.465 13908.514 - 13965.750: 59.7735% ( 59) 00:09:44.465 13965.750 - 14022.987: 60.3712% ( 57) 00:09:44.465 14022.987 - 14080.224: 61.0948% ( 69) 00:09:44.465 14080.224 - 14137.460: 61.7450% ( 62) 00:09:44.465 14137.460 - 14194.697: 62.5105% ( 73) 00:09:44.465 14194.697 - 14251.934: 63.4018% ( 85) 00:09:44.465 14251.934 - 14309.170: 64.2408% ( 80) 00:09:44.465 14309.170 - 14366.407: 65.1007% ( 82) 00:09:44.465 14366.407 - 14423.644: 66.1388% ( 99) 00:09:44.465 14423.644 - 14480.880: 66.9883% ( 81) 00:09:44.465 14480.880 - 14538.117: 67.7957% ( 77) 00:09:44.465 14538.117 - 14595.354: 68.3830% ( 56) 00:09:44.465 14595.354 - 14652.590: 68.8024% ( 40) 00:09:44.465 14652.590 - 14767.064: 69.7882% ( 94) 00:09:44.465 14767.064 - 14881.537: 70.6481% ( 82) 00:09:44.465 14881.537 - 14996.010: 71.7282% ( 103) 00:09:44.465 14996.010 - 15110.484: 72.7244% ( 95) 00:09:44.465 15110.484 - 15224.957: 73.5109% ( 75) 00:09:44.465 15224.957 - 15339.431: 74.3079% ( 76) 00:09:44.465 15339.431 - 15453.904: 75.2517% ( 90) 00:09:44.465 15453.904 - 15568.377: 76.4157% ( 111) 00:09:44.465 15568.377 - 15682.851: 77.7580% ( 128) 00:09:44.465 15682.851 - 15797.324: 78.8905% ( 108) 00:09:44.465 15797.324 - 15911.797: 79.8029% ( 87) 00:09:44.465 15911.797 - 16026.271: 80.8410% ( 99) 00:09:44.465 16026.271 - 16140.744: 81.9107% ( 102) 00:09:44.465 16140.744 - 16255.217: 82.7706% ( 82) 00:09:44.465 16255.217 - 
16369.691: 83.6829% ( 87) 00:09:44.465 16369.691 - 16484.164: 84.7001% ( 97) 00:09:44.465 16484.164 - 16598.638: 85.5495% ( 81) 00:09:44.465 16598.638 - 16713.111: 86.3255% ( 74) 00:09:44.465 16713.111 - 16827.584: 87.3742% ( 100) 00:09:44.465 16827.584 - 16942.058: 88.3704% ( 95) 00:09:44.465 16942.058 - 17056.531: 89.6497% ( 122) 00:09:44.465 17056.531 - 17171.004: 90.6460% ( 95) 00:09:44.465 17171.004 - 17285.478: 91.8519% ( 115) 00:09:44.465 17285.478 - 17399.951: 92.9320% ( 103) 00:09:44.465 17399.951 - 17514.424: 93.5717% ( 61) 00:09:44.465 17514.424 - 17628.898: 94.1799% ( 58) 00:09:44.465 17628.898 - 17743.371: 94.7567% ( 55) 00:09:44.465 17743.371 - 17857.845: 95.2496% ( 47) 00:09:44.465 17857.845 - 17972.318: 95.6061% ( 34) 00:09:44.465 17972.318 - 18086.791: 95.8473% ( 23) 00:09:44.465 18086.791 - 18201.265: 96.0675% ( 21) 00:09:44.465 18201.265 - 18315.738: 96.3402% ( 26) 00:09:44.465 18315.738 - 18430.211: 96.5919% ( 24) 00:09:44.465 18430.211 - 18544.685: 97.0638% ( 45) 00:09:44.465 18544.685 - 18659.158: 97.6720% ( 58) 00:09:44.465 18659.158 - 18773.631: 98.0495% ( 36) 00:09:44.465 18773.631 - 18888.105: 98.4060% ( 34) 00:09:44.465 18888.105 - 19002.578: 98.5633% ( 15) 00:09:44.465 19002.578 - 19117.052: 98.7416% ( 17) 00:09:44.465 19117.052 - 19231.525: 98.8779% ( 13) 00:09:44.465 19231.525 - 19345.998: 99.0143% ( 13) 00:09:44.465 19345.998 - 19460.472: 99.0667% ( 5) 00:09:44.465 19460.472 - 19574.945: 99.1191% ( 5) 00:09:44.465 19574.945 - 19689.418: 99.1716% ( 5) 00:09:44.465 19689.418 - 19803.892: 99.2240% ( 5) 00:09:44.465 19803.892 - 19918.365: 99.2764% ( 5) 00:09:44.465 19918.365 - 20032.838: 99.3184% ( 4) 00:09:44.465 20032.838 - 20147.312: 99.3289% ( 1) 00:09:44.465 28045.974 - 28160.447: 99.3603% ( 3) 00:09:44.465 28160.447 - 28274.921: 99.3918% ( 3) 00:09:44.465 28274.921 - 28389.394: 99.4337% ( 4) 00:09:44.465 28389.394 - 28503.867: 99.4652% ( 3) 00:09:44.465 28503.867 - 28618.341: 99.5071% ( 4) 00:09:44.465 28618.341 - 28732.814: 99.5386% ( 3) 00:09:44.465 28732.814 - 28847.287: 99.5701% ( 3) 00:09:44.465 28847.287 - 28961.761: 99.6120% ( 4) 00:09:44.465 28961.761 - 29076.234: 99.6435% ( 3) 00:09:44.465 29076.234 - 29190.707: 99.6854% ( 4) 00:09:44.465 29190.707 - 29305.181: 99.7064% ( 2) 00:09:44.465 29305.181 - 29534.128: 99.7693% ( 6) 00:09:44.465 29534.128 - 29763.074: 99.8532% ( 8) 00:09:44.465 29763.074 - 29992.021: 99.9266% ( 7) 00:09:44.465 29992.021 - 30220.968: 100.0000% ( 7) 00:09:44.465 00:09:44.465 08:31:05 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:44.465 00:09:44.465 real 0m2.537s 00:09:44.465 user 0m2.196s 00:09:44.465 sys 0m0.236s 00:09:44.465 08:31:05 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.465 08:31:05 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:09:44.465 ************************************ 00:09:44.465 END TEST nvme_perf 00:09:44.465 ************************************ 00:09:44.465 08:31:06 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:44.465 08:31:06 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:44.465 08:31:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.465 08:31:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.465 ************************************ 00:09:44.465 START TEST nvme_hello_world 00:09:44.466 ************************************ 00:09:44.466 08:31:06 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:44.466 Initializing NVMe Controllers 00:09:44.466 Attached to 0000:00:10.0 00:09:44.466 Namespace ID: 1 size: 6GB 00:09:44.466 Attached to 0000:00:11.0 00:09:44.466 Namespace ID: 1 size: 5GB 00:09:44.466 Attached to 0000:00:13.0 00:09:44.466 Namespace ID: 1 size: 1GB 00:09:44.466 Attached to 0000:00:12.0 00:09:44.466 Namespace ID: 1 size: 4GB 00:09:44.466 Namespace ID: 2 size: 4GB 00:09:44.466 Namespace ID: 3 size: 4GB 00:09:44.466 Initialization complete. 00:09:44.466 INFO: using host memory buffer for IO 00:09:44.466 Hello world! 00:09:44.466 INFO: using host memory buffer for IO 00:09:44.466 Hello world! 00:09:44.466 INFO: using host memory buffer for IO 00:09:44.466 Hello world! 00:09:44.466 INFO: using host memory buffer for IO 00:09:44.466 Hello world! 00:09:44.466 INFO: using host memory buffer for IO 00:09:44.466 Hello world! 00:09:44.466 INFO: using host memory buffer for IO 00:09:44.466 Hello world! 00:09:44.466 00:09:44.466 real 0m0.244s 00:09:44.466 user 0m0.089s 00:09:44.466 sys 0m0.116s 00:09:44.466 08:31:06 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.466 08:31:06 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:44.466 ************************************ 00:09:44.466 END TEST nvme_hello_world 00:09:44.466 ************************************ 00:09:44.466 08:31:06 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:44.466 08:31:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.466 08:31:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.466 08:31:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.725 ************************************ 00:09:44.725 START TEST nvme_sgl 00:09:44.725 ************************************ 00:09:44.725 08:31:06 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:44.725 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:09:44.725 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:09:44.725 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:09:44.725 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:09:44.725 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:09:44.725 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:09:44.725 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:09:44.725 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:09:44.725 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:09:44.985 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:09:44.985 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:09:44.985 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:09:44.985 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:09:44.985 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:09:44.985 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:09:44.985 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:09:44.985 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:09:44.985 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:09:44.985 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:09:44.985 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:09:44.985 0000:00:13.0: build_io_request_8 Invalid IO 
length parameter 00:09:44.985 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:09:44.985 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:09:44.985 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:09:44.985 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:09:44.985 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:09:44.985 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:09:44.985 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:09:44.985 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:09:44.985 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:09:44.985 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:09:44.985 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:09:44.985 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:09:44.985 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:09:44.985 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:44.985 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:44.985 NVMe Readv/Writev Request test 00:09:44.985 Attached to 0000:00:10.0 00:09:44.985 Attached to 0000:00:11.0 00:09:44.985 Attached to 0000:00:13.0 00:09:44.985 Attached to 0000:00:12.0 00:09:44.985 0000:00:10.0: build_io_request_2 test passed 00:09:44.985 0000:00:10.0: build_io_request_4 test passed 00:09:44.985 0000:00:10.0: build_io_request_5 test passed 00:09:44.985 0000:00:10.0: build_io_request_6 test passed 00:09:44.985 0000:00:10.0: build_io_request_7 test passed 00:09:44.985 0000:00:10.0: build_io_request_10 test passed 00:09:44.985 0000:00:11.0: build_io_request_2 test passed 00:09:44.985 0000:00:11.0: build_io_request_4 test passed 00:09:44.985 0000:00:11.0: build_io_request_5 test passed 00:09:44.985 0000:00:11.0: build_io_request_6 test passed 00:09:44.985 0000:00:11.0: build_io_request_7 test passed 00:09:44.985 0000:00:11.0: build_io_request_10 test passed 00:09:44.985 Cleaning up... 00:09:44.985 00:09:44.985 real 0m0.304s 00:09:44.985 user 0m0.131s 00:09:44.985 sys 0m0.123s 00:09:44.985 08:31:06 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.985 08:31:06 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:44.985 ************************************ 00:09:44.985 END TEST nvme_sgl 00:09:44.985 ************************************ 00:09:44.985 08:31:06 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:44.985 08:31:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.985 08:31:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.985 08:31:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.985 ************************************ 00:09:44.985 START TEST nvme_e2edp 00:09:44.985 ************************************ 00:09:44.985 08:31:06 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:45.244 NVMe Write/Read with End-to-End data protection test 00:09:45.244 Attached to 0000:00:10.0 00:09:45.244 Attached to 0000:00:11.0 00:09:45.244 Attached to 0000:00:13.0 00:09:45.244 Attached to 0000:00:12.0 00:09:45.244 Cleaning up... 
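The nvme_sgl output above comes from test/nvme/sgl/sgl, which builds vectored I/O requests and expects the driver to reject ones whose total length does not match the LBA count ("Invalid IO length parameter"). As a rough sketch only, not the test's source, a scatter-gather write through the SPDK NVMe driver hands the driver two callbacks that walk the caller's segment list; the two-segment context, buffer handling, and names below are assumptions.

    /*
     * Minimal sketch of a scatter-gather write: assumes an already-attached
     * namespace and I/O qpair and a caller-filled two-segment iovec list.
     * Not the sgl test source.
     */
    #include <stdio.h>
    #include <sys/uio.h>
    #include "spdk/nvme.h"

    struct sgl_ctx {
        struct iovec iov[2];   /* hypothetical two-segment payload */
        int          iovpos;
    };

    static void
    reset_sgl(void *cb_arg, uint32_t sgl_offset)
    {
        struct sgl_ctx *ctx = cb_arg;

        ctx->iovpos = 0;       /* restart the walk; offset handling omitted */
        (void)sgl_offset;
    }

    static int
    next_sge(void *cb_arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *ctx = cb_arg;
        struct iovec *iov = &ctx->iov[ctx->iovpos++];

        *address = iov->iov_base;
        *length = (uint32_t)iov->iov_len;
        return 0;
    }

    static void
    write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "vectored write failed\n");
        }
    }

    /*
     * The iovec lengths must sum to lba_count * sector size; a mismatched
     * total is what the test turns into "Invalid IO length parameter" above.
     */
    static int
    submit_sgl_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                     struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count)
    {
        return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count,
                                       write_done, ctx, 0, reset_sgl, next_sge);
    }
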
00:09:45.244 00:09:45.244 real 0m0.237s 00:09:45.244 user 0m0.075s 00:09:45.244 sys 0m0.113s 00:09:45.244 08:31:06 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.244 08:31:06 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:45.244 ************************************ 00:09:45.244 END TEST nvme_e2edp 00:09:45.244 ************************************ 00:09:45.244 08:31:07 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:45.244 08:31:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.244 08:31:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.244 08:31:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.244 ************************************ 00:09:45.244 START TEST nvme_reserve 00:09:45.244 ************************************ 00:09:45.244 08:31:07 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:45.505 ===================================================== 00:09:45.505 NVMe Controller at PCI bus 0, device 16, function 0 00:09:45.505 ===================================================== 00:09:45.505 Reservations: Not Supported 00:09:45.505 ===================================================== 00:09:45.505 NVMe Controller at PCI bus 0, device 17, function 0 00:09:45.505 ===================================================== 00:09:45.505 Reservations: Not Supported 00:09:45.505 ===================================================== 00:09:45.505 NVMe Controller at PCI bus 0, device 19, function 0 00:09:45.505 ===================================================== 00:09:45.505 Reservations: Not Supported 00:09:45.505 ===================================================== 00:09:45.505 NVMe Controller at PCI bus 0, device 18, function 0 00:09:45.505 ===================================================== 00:09:45.505 Reservations: Not Supported 00:09:45.505 Reservation test passed 00:09:45.505 00:09:45.505 real 0m0.235s 00:09:45.505 user 0m0.079s 00:09:45.505 sys 0m0.109s 00:09:45.505 08:31:07 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.505 08:31:07 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:45.505 ************************************ 00:09:45.505 END TEST nvme_reserve 00:09:45.505 ************************************ 00:09:45.505 08:31:07 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:45.505 08:31:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.505 08:31:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.505 08:31:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.505 ************************************ 00:09:45.505 START TEST nvme_err_injection 00:09:45.505 ************************************ 00:09:45.505 08:31:07 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:45.764 NVMe Error Injection test 00:09:45.764 Attached to 0000:00:10.0 00:09:45.764 Attached to 0000:00:11.0 00:09:45.764 Attached to 0000:00:13.0 00:09:45.764 Attached to 0000:00:12.0 00:09:45.764 0000:00:10.0: get features failed as expected 00:09:45.764 0000:00:11.0: get features failed as expected 00:09:45.764 0000:00:13.0: get features failed as expected 00:09:45.764 0000:00:12.0: get features failed as expected 00:09:45.764 
0000:00:10.0: get features successfully as expected 00:09:45.764 0000:00:11.0: get features successfully as expected 00:09:45.764 0000:00:13.0: get features successfully as expected 00:09:45.764 0000:00:12.0: get features successfully as expected 00:09:45.764 0000:00:10.0: read failed as expected 00:09:45.764 0000:00:11.0: read failed as expected 00:09:45.764 0000:00:13.0: read failed as expected 00:09:45.764 0000:00:12.0: read failed as expected 00:09:45.764 0000:00:11.0: read successfully as expected 00:09:45.764 0000:00:10.0: read successfully as expected 00:09:45.764 0000:00:13.0: read successfully as expected 00:09:45.764 0000:00:12.0: read successfully as expected 00:09:45.764 Cleaning up... 00:09:45.764 00:09:45.764 real 0m0.232s 00:09:45.764 user 0m0.091s 00:09:45.764 sys 0m0.100s 00:09:45.764 08:31:07 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.764 08:31:07 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:45.764 ************************************ 00:09:45.764 END TEST nvme_err_injection 00:09:45.764 ************************************ 00:09:45.764 08:31:07 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:45.764 08:31:07 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:09:45.765 08:31:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.765 08:31:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.765 ************************************ 00:09:45.765 START TEST nvme_overhead 00:09:45.765 ************************************ 00:09:45.765 08:31:07 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:47.163 Initializing NVMe Controllers 00:09:47.163 Attached to 0000:00:10.0 00:09:47.163 Attached to 0000:00:11.0 00:09:47.163 Attached to 0000:00:13.0 00:09:47.163 Attached to 0000:00:12.0 00:09:47.163 Initialization complete. Launching workers. 
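The nvme_err_injection pass above drives the same admin command twice, once with error injection armed ("get features failed as expected") and once without. What follows is a hedged sketch of the underlying Get Features call on the temperature-threshold feature, with an assumed completion callback and busy-poll loop; it is not the err_injection test source.

    /*
     * Sketch: issue Get Features (temperature threshold) and poll the admin
     * queue until it completes. Assumes an attached controller.
     */
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    struct feat_ctx {
        bool done;
        bool failed;
    };

    static void
    get_feat_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        struct feat_ctx *ctx = cb_arg;

        /* reads back as "failed as expected" while injection is armed */
        ctx->failed = spdk_nvme_cpl_is_error(cpl);
        ctx->done = true;
    }

    static int
    get_temp_threshold(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct feat_ctx ctx = {0};
        int rc;

        rc = spdk_nvme_ctrlr_cmd_get_feature(ctrlr,
                                             SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                             0, NULL, 0, get_feat_done, &ctx);
        if (rc != 0) {
            return rc;
        }
        while (!ctx.done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        printf("get features %s\n", ctx.failed ? "failed" : "succeeded");
        return ctx.failed ? -1 : 0;
    }
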
00:09:47.163 submit (in ns) avg, min, max = 12529.2, 10086.5, 54641.0 00:09:47.163 complete (in ns) avg, min, max = 7526.5, 5668.1, 176476.9 00:09:47.163 00:09:47.163 Submit histogram 00:09:47.163 ================ 00:09:47.163 Range in us Cumulative Count 00:09:47.163 10.061 - 10.117: 0.0112% ( 1) 00:09:47.163 10.229 - 10.285: 0.0560% ( 4) 00:09:47.163 10.285 - 10.341: 0.1569% ( 9) 00:09:47.163 10.341 - 10.397: 0.2242% ( 6) 00:09:47.163 10.397 - 10.452: 0.4820% ( 23) 00:09:47.163 10.452 - 10.508: 0.8294% ( 31) 00:09:47.163 10.508 - 10.564: 1.2441% ( 37) 00:09:47.163 10.564 - 10.620: 1.9278% ( 61) 00:09:47.163 10.620 - 10.676: 2.6451% ( 64) 00:09:47.163 10.676 - 10.732: 3.3289% ( 61) 00:09:47.163 10.732 - 10.788: 4.2367% ( 81) 00:09:47.163 10.788 - 10.844: 5.2455% ( 90) 00:09:47.163 10.844 - 10.900: 6.3327% ( 97) 00:09:47.163 10.900 - 10.955: 7.3638% ( 92) 00:09:47.163 10.955 - 11.011: 8.4510% ( 97) 00:09:47.163 11.011 - 11.067: 9.4037% ( 85) 00:09:47.163 11.067 - 11.123: 10.2219% ( 73) 00:09:47.163 11.123 - 11.179: 11.1522% ( 83) 00:09:47.163 11.179 - 11.235: 12.1834% ( 92) 00:09:47.163 11.235 - 11.291: 13.2145% ( 92) 00:09:47.163 11.291 - 11.347: 14.6380% ( 127) 00:09:47.163 11.347 - 11.403: 16.2520% ( 144) 00:09:47.163 11.403 - 11.459: 18.0677% ( 162) 00:09:47.163 11.459 - 11.514: 20.1076% ( 182) 00:09:47.163 11.514 - 11.570: 22.5174% ( 215) 00:09:47.163 11.570 - 11.626: 25.6333% ( 278) 00:09:47.163 11.626 - 11.682: 28.9285% ( 294) 00:09:47.163 11.682 - 11.738: 31.8651% ( 262) 00:09:47.163 11.738 - 11.794: 34.9473% ( 275) 00:09:47.163 11.794 - 11.850: 38.2089% ( 291) 00:09:47.163 11.850 - 11.906: 41.2239% ( 269) 00:09:47.163 11.906 - 11.962: 44.2278% ( 268) 00:09:47.163 11.962 - 12.017: 47.4894% ( 291) 00:09:47.163 12.017 - 12.073: 50.0785% ( 231) 00:09:47.163 12.073 - 12.129: 53.2392% ( 282) 00:09:47.163 12.129 - 12.185: 56.3887% ( 281) 00:09:47.163 12.185 - 12.241: 59.3028% ( 260) 00:09:47.163 12.241 - 12.297: 62.0377% ( 244) 00:09:47.163 12.297 - 12.353: 64.6156% ( 230) 00:09:47.163 12.353 - 12.409: 66.9132% ( 205) 00:09:47.163 12.409 - 12.465: 69.3006% ( 213) 00:09:47.163 12.465 - 12.521: 71.5086% ( 197) 00:09:47.163 12.521 - 12.576: 73.1675% ( 148) 00:09:47.163 12.576 - 12.632: 74.9159% ( 156) 00:09:47.163 12.632 - 12.688: 76.3954% ( 132) 00:09:47.163 12.688 - 12.744: 77.6283% ( 110) 00:09:47.163 12.744 - 12.800: 78.7492% ( 100) 00:09:47.163 12.800 - 12.856: 79.8252% ( 96) 00:09:47.163 12.856 - 12.912: 80.9236% ( 98) 00:09:47.163 12.912 - 12.968: 82.0780% ( 103) 00:09:47.163 12.968 - 13.024: 83.0531% ( 87) 00:09:47.163 13.024 - 13.079: 83.8601% ( 72) 00:09:47.163 13.079 - 13.135: 84.4654% ( 54) 00:09:47.163 13.135 - 13.191: 85.2724% ( 72) 00:09:47.163 13.191 - 13.247: 85.9785% ( 63) 00:09:47.163 13.247 - 13.303: 86.6398% ( 59) 00:09:47.163 13.303 - 13.359: 87.1441% ( 45) 00:09:47.163 13.359 - 13.415: 87.5925% ( 40) 00:09:47.163 13.415 - 13.471: 88.0520% ( 41) 00:09:47.163 13.471 - 13.527: 88.3658% ( 28) 00:09:47.163 13.527 - 13.583: 88.5115% ( 13) 00:09:47.163 13.583 - 13.638: 88.6460% ( 12) 00:09:47.163 13.638 - 13.694: 88.8254% ( 16) 00:09:47.163 13.694 - 13.750: 89.0047% ( 16) 00:09:47.163 13.750 - 13.806: 89.2401% ( 21) 00:09:47.163 13.806 - 13.862: 89.3522% ( 10) 00:09:47.163 13.862 - 13.918: 89.4530% ( 9) 00:09:47.163 13.918 - 13.974: 89.5651% ( 10) 00:09:47.163 13.974 - 14.030: 89.6212% ( 5) 00:09:47.163 14.030 - 14.086: 89.6884% ( 6) 00:09:47.163 14.086 - 14.141: 89.8229% ( 12) 00:09:47.163 14.141 - 14.197: 89.9798% ( 14) 00:09:47.163 14.197 - 14.253: 90.2488% ( 24) 
00:09:47.163 14.253 - 14.309: 90.4842% ( 21) 00:09:47.163 14.309 - 14.421: 91.3024% ( 73) 00:09:47.163 14.421 - 14.533: 92.1991% ( 80) 00:09:47.163 14.533 - 14.645: 92.6586% ( 41) 00:09:47.163 14.645 - 14.756: 93.1069% ( 40) 00:09:47.163 14.756 - 14.868: 93.3759% ( 24) 00:09:47.163 14.868 - 14.980: 93.6225% ( 22) 00:09:47.163 14.980 - 15.092: 93.8018% ( 16) 00:09:47.163 15.092 - 15.203: 93.9924% ( 17) 00:09:47.163 15.203 - 15.315: 94.1717% ( 16) 00:09:47.163 15.315 - 15.427: 94.3398% ( 15) 00:09:47.163 15.427 - 15.539: 94.5416% ( 18) 00:09:47.163 15.539 - 15.651: 94.6649% ( 11) 00:09:47.163 15.651 - 15.762: 94.7994% ( 12) 00:09:47.163 15.762 - 15.874: 94.8890% ( 8) 00:09:47.163 15.874 - 15.986: 95.0123% ( 11) 00:09:47.163 15.986 - 16.098: 95.1132% ( 9) 00:09:47.163 16.098 - 16.210: 95.2029% ( 8) 00:09:47.163 16.210 - 16.321: 95.2813% ( 7) 00:09:47.163 16.321 - 16.433: 95.4158% ( 12) 00:09:47.163 16.433 - 16.545: 95.5391% ( 11) 00:09:47.163 16.545 - 16.657: 95.7521% ( 19) 00:09:47.163 16.657 - 16.769: 95.8642% ( 10) 00:09:47.163 16.769 - 16.880: 96.0659% ( 18) 00:09:47.163 16.880 - 16.992: 96.2677% ( 18) 00:09:47.163 16.992 - 17.104: 96.4694% ( 18) 00:09:47.163 17.104 - 17.216: 96.5703% ( 9) 00:09:47.163 17.216 - 17.328: 96.6487% ( 7) 00:09:47.163 17.328 - 17.439: 96.7720% ( 11) 00:09:47.163 17.439 - 17.551: 96.9065% ( 12) 00:09:47.163 17.551 - 17.663: 97.0522% ( 13) 00:09:47.163 17.663 - 17.775: 97.1307% ( 7) 00:09:47.163 17.775 - 17.886: 97.2316% ( 9) 00:09:47.163 17.886 - 17.998: 97.3100% ( 7) 00:09:47.163 17.998 - 18.110: 97.3885% ( 7) 00:09:47.163 18.110 - 18.222: 97.5118% ( 11) 00:09:47.164 18.222 - 18.334: 97.5566% ( 4) 00:09:47.164 18.334 - 18.445: 97.6463% ( 8) 00:09:47.164 18.445 - 18.557: 97.7023% ( 5) 00:09:47.164 18.557 - 18.669: 97.7808% ( 7) 00:09:47.164 18.669 - 18.781: 97.9041% ( 11) 00:09:47.164 18.781 - 18.893: 97.9601% ( 5) 00:09:47.164 18.893 - 19.004: 98.0161% ( 5) 00:09:47.164 19.004 - 19.116: 98.0498% ( 3) 00:09:47.164 19.116 - 19.228: 98.0610% ( 1) 00:09:47.164 19.228 - 19.340: 98.1170% ( 5) 00:09:47.164 19.340 - 19.452: 98.1618% ( 4) 00:09:47.164 19.452 - 19.563: 98.1955% ( 3) 00:09:47.164 19.563 - 19.675: 98.2067% ( 1) 00:09:47.164 19.675 - 19.787: 98.2739% ( 6) 00:09:47.164 19.787 - 19.899: 98.3188% ( 4) 00:09:47.164 19.899 - 20.010: 98.3524% ( 3) 00:09:47.164 20.010 - 20.122: 98.3972% ( 4) 00:09:47.164 20.122 - 20.234: 98.4196% ( 2) 00:09:47.164 20.234 - 20.346: 98.4308% ( 1) 00:09:47.164 20.569 - 20.681: 98.4421% ( 1) 00:09:47.164 20.681 - 20.793: 98.4869% ( 4) 00:09:47.164 20.793 - 20.905: 98.5093% ( 2) 00:09:47.164 20.905 - 21.017: 98.5541% ( 4) 00:09:47.164 21.017 - 21.128: 98.5653% ( 1) 00:09:47.164 21.128 - 21.240: 98.5878% ( 2) 00:09:47.164 21.240 - 21.352: 98.5990% ( 1) 00:09:47.164 21.352 - 21.464: 98.6102% ( 1) 00:09:47.164 21.464 - 21.576: 98.6326% ( 2) 00:09:47.164 21.576 - 21.687: 98.6438% ( 1) 00:09:47.164 21.687 - 21.799: 98.6550% ( 1) 00:09:47.164 21.799 - 21.911: 98.6774% ( 2) 00:09:47.164 22.023 - 22.134: 98.6998% ( 2) 00:09:47.164 22.134 - 22.246: 98.7335% ( 3) 00:09:47.164 22.246 - 22.358: 98.7559% ( 2) 00:09:47.164 22.358 - 22.470: 98.7783% ( 2) 00:09:47.164 22.470 - 22.582: 98.8231% ( 4) 00:09:47.164 22.582 - 22.693: 98.8904% ( 6) 00:09:47.164 22.693 - 22.805: 98.9128% ( 2) 00:09:47.164 22.805 - 22.917: 98.9240% ( 1) 00:09:47.164 22.917 - 23.029: 98.9913% ( 6) 00:09:47.164 23.029 - 23.141: 99.0361% ( 4) 00:09:47.164 23.141 - 23.252: 99.0809% ( 4) 00:09:47.164 23.252 - 23.364: 99.1033% ( 2) 00:09:47.164 23.364 - 23.476: 99.1706% ( 6) 
00:09:47.164 23.476 - 23.588: 99.2042% ( 3) 00:09:47.164 23.588 - 23.700: 99.2603% ( 5) 00:09:47.164 23.700 - 23.811: 99.3051% ( 4) 00:09:47.164 23.811 - 23.923: 99.3387% ( 3) 00:09:47.164 23.923 - 24.035: 99.3723% ( 3) 00:09:47.164 24.035 - 24.147: 99.4284% ( 5) 00:09:47.164 24.259 - 24.370: 99.4396% ( 1) 00:09:47.164 24.370 - 24.482: 99.4508% ( 1) 00:09:47.164 24.482 - 24.594: 99.4844% ( 3) 00:09:47.164 24.594 - 24.706: 99.4956% ( 1) 00:09:47.164 24.817 - 24.929: 99.5068% ( 1) 00:09:47.164 24.929 - 25.041: 99.5180% ( 1) 00:09:47.164 25.376 - 25.488: 99.5405% ( 2) 00:09:47.164 25.824 - 25.935: 99.5517% ( 1) 00:09:47.164 26.383 - 26.494: 99.5629% ( 1) 00:09:47.164 26.494 - 26.606: 99.5965% ( 3) 00:09:47.164 26.606 - 26.718: 99.6189% ( 2) 00:09:47.164 26.830 - 26.941: 99.6301% ( 1) 00:09:47.164 27.053 - 27.165: 99.6525% ( 2) 00:09:47.164 27.389 - 27.500: 99.6862% ( 3) 00:09:47.164 27.500 - 27.612: 99.6974% ( 1) 00:09:47.164 27.836 - 27.948: 99.7086% ( 1) 00:09:47.164 28.059 - 28.171: 99.7310% ( 2) 00:09:47.164 28.283 - 28.395: 99.7534% ( 2) 00:09:47.164 28.395 - 28.507: 99.7870% ( 3) 00:09:47.164 28.618 - 28.842: 99.7983% ( 1) 00:09:47.164 28.842 - 29.066: 99.8319% ( 3) 00:09:47.164 29.066 - 29.289: 99.8431% ( 1) 00:09:47.164 29.289 - 29.513: 99.8543% ( 1) 00:09:47.164 29.513 - 29.736: 99.8767% ( 2) 00:09:47.164 29.736 - 29.960: 99.8879% ( 1) 00:09:47.164 29.960 - 30.183: 99.8991% ( 1) 00:09:47.164 30.854 - 31.078: 99.9215% ( 2) 00:09:47.164 31.525 - 31.748: 99.9328% ( 1) 00:09:47.164 32.866 - 33.090: 99.9440% ( 1) 00:09:47.164 33.984 - 34.208: 99.9552% ( 1) 00:09:47.164 34.879 - 35.102: 99.9664% ( 1) 00:09:47.164 35.997 - 36.220: 99.9776% ( 1) 00:09:47.164 52.989 - 53.212: 99.9888% ( 1) 00:09:47.164 54.554 - 54.777: 100.0000% ( 1) 00:09:47.164 00:09:47.164 Complete histogram 00:09:47.164 ================== 00:09:47.164 Range in us Cumulative Count 00:09:47.164 5.645 - 5.673: 0.0112% ( 1) 00:09:47.164 5.673 - 5.701: 0.0672% ( 5) 00:09:47.164 5.701 - 5.729: 0.1905% ( 11) 00:09:47.164 5.729 - 5.757: 0.3811% ( 17) 00:09:47.164 5.757 - 5.785: 0.5044% ( 11) 00:09:47.164 5.785 - 5.813: 0.6613% ( 14) 00:09:47.164 5.813 - 5.841: 0.8855% ( 20) 00:09:47.164 5.841 - 5.869: 1.1881% ( 27) 00:09:47.164 5.869 - 5.897: 1.4907% ( 27) 00:09:47.164 5.897 - 5.925: 1.9390% ( 40) 00:09:47.164 5.925 - 5.953: 2.2529% ( 28) 00:09:47.164 5.953 - 5.981: 2.7236% ( 42) 00:09:47.164 5.981 - 6.009: 3.1383% ( 37) 00:09:47.164 6.009 - 6.037: 3.5082% ( 33) 00:09:47.164 6.037 - 6.065: 3.9117% ( 36) 00:09:47.164 6.065 - 6.093: 4.2479% ( 30) 00:09:47.164 6.093 - 6.121: 4.5618% ( 28) 00:09:47.164 6.121 - 6.148: 4.9653% ( 36) 00:09:47.164 6.148 - 6.176: 5.4920% ( 47) 00:09:47.164 6.176 - 6.204: 5.8843% ( 35) 00:09:47.164 6.204 - 6.232: 6.3663% ( 43) 00:09:47.164 6.232 - 6.260: 6.7810% ( 37) 00:09:47.164 6.260 - 6.288: 7.1957% ( 37) 00:09:47.164 6.288 - 6.316: 7.5319% ( 30) 00:09:47.164 6.316 - 6.344: 7.8234% ( 26) 00:09:47.164 6.344 - 6.372: 8.2829% ( 41) 00:09:47.164 6.372 - 6.400: 8.8209% ( 48) 00:09:47.164 6.400 - 6.428: 9.6391% ( 73) 00:09:47.164 6.428 - 6.456: 10.7039% ( 95) 00:09:47.164 6.456 - 6.484: 11.7014% ( 89) 00:09:47.164 6.484 - 6.512: 12.9567% ( 112) 00:09:47.164 6.512 - 6.540: 14.3914% ( 128) 00:09:47.164 6.540 - 6.568: 16.0054% ( 144) 00:09:47.164 6.568 - 6.596: 17.5297% ( 136) 00:09:47.164 6.596 - 6.624: 19.3679% ( 164) 00:09:47.164 6.624 - 6.652: 21.0827% ( 153) 00:09:47.164 6.652 - 6.679: 22.6967% ( 144) 00:09:47.164 6.679 - 6.707: 24.5573% ( 166) 00:09:47.164 6.707 - 6.735: 26.2833% ( 154) 00:09:47.164 6.735 
- 6.763: 27.8861% ( 143) 00:09:47.164 6.763 - 6.791: 29.6682% ( 159) 00:09:47.164 6.791 - 6.819: 31.1701% ( 134) 00:09:47.164 6.819 - 6.847: 32.7393% ( 140) 00:09:47.164 6.847 - 6.875: 34.1067% ( 122) 00:09:47.164 6.875 - 6.903: 35.6871% ( 141) 00:09:47.164 6.903 - 6.931: 37.0208% ( 119) 00:09:47.164 6.931 - 6.959: 38.4219% ( 125) 00:09:47.164 6.959 - 6.987: 39.8229% ( 125) 00:09:47.164 6.987 - 7.015: 41.0334% ( 108) 00:09:47.164 7.015 - 7.043: 42.4681% ( 128) 00:09:47.164 7.043 - 7.071: 44.0933% ( 145) 00:09:47.164 7.071 - 7.099: 45.5391% ( 129) 00:09:47.164 7.099 - 7.127: 46.6151% ( 96) 00:09:47.164 7.127 - 7.155: 47.8368% ( 109) 00:09:47.164 7.155 - 7.210: 50.8182% ( 266) 00:09:47.164 7.210 - 7.266: 54.4833% ( 327) 00:09:47.164 7.266 - 7.322: 58.4174% ( 351) 00:09:47.164 7.322 - 7.378: 62.1610% ( 334) 00:09:47.164 7.378 - 7.434: 65.5010% ( 298) 00:09:47.164 7.434 - 7.490: 68.7626% ( 291) 00:09:47.164 7.490 - 7.546: 71.5086% ( 245) 00:09:47.164 7.546 - 7.602: 74.1426% ( 235) 00:09:47.164 7.602 - 7.658: 76.1152% ( 176) 00:09:47.164 7.658 - 7.714: 77.9534% ( 164) 00:09:47.164 7.714 - 7.769: 79.3208% ( 122) 00:09:47.164 7.769 - 7.825: 80.7779% ( 130) 00:09:47.164 7.825 - 7.881: 82.0332% ( 112) 00:09:47.164 7.881 - 7.937: 83.4454% ( 126) 00:09:47.164 7.937 - 7.993: 84.4430% ( 89) 00:09:47.164 7.993 - 8.049: 85.2275% ( 70) 00:09:47.164 8.049 - 8.105: 86.0345% ( 72) 00:09:47.164 8.105 - 8.161: 86.7743% ( 66) 00:09:47.164 8.161 - 8.217: 87.5140% ( 66) 00:09:47.164 8.217 - 8.272: 88.2874% ( 69) 00:09:47.164 8.272 - 8.328: 88.7805% ( 44) 00:09:47.164 8.328 - 8.384: 89.2737% ( 44) 00:09:47.164 8.384 - 8.440: 89.6436% ( 33) 00:09:47.164 8.440 - 8.496: 89.8453% ( 18) 00:09:47.164 8.496 - 8.552: 90.0247% ( 16) 00:09:47.164 8.552 - 8.608: 90.3161% ( 26) 00:09:47.164 8.608 - 8.664: 90.4394% ( 11) 00:09:47.164 8.664 - 8.720: 90.5739% ( 12) 00:09:47.164 8.720 - 8.776: 90.6859% ( 10) 00:09:47.164 8.776 - 8.831: 90.7868% ( 9) 00:09:47.164 8.831 - 8.887: 90.8653% ( 7) 00:09:47.164 8.887 - 8.943: 90.9325% ( 6) 00:09:47.164 8.943 - 8.999: 91.1679% ( 21) 00:09:47.164 8.999 - 9.055: 91.5490% ( 34) 00:09:47.164 9.055 - 9.111: 92.1879% ( 57) 00:09:47.164 9.111 - 9.167: 92.8603% ( 60) 00:09:47.164 9.167 - 9.223: 93.5328% ( 60) 00:09:47.164 9.223 - 9.279: 94.0820% ( 49) 00:09:47.164 9.279 - 9.334: 94.5864% ( 45) 00:09:47.165 9.334 - 9.390: 94.8554% ( 24) 00:09:47.165 9.390 - 9.446: 95.1692% ( 28) 00:09:47.165 9.446 - 9.502: 95.3710% ( 18) 00:09:47.165 9.502 - 9.558: 95.5391% ( 15) 00:09:47.165 9.558 - 9.614: 95.6960% ( 14) 00:09:47.165 9.614 - 9.670: 95.8754% ( 16) 00:09:47.165 9.670 - 9.726: 96.0435% ( 15) 00:09:47.165 9.726 - 9.782: 96.2116% ( 15) 00:09:47.165 9.782 - 9.838: 96.3685% ( 14) 00:09:47.165 9.838 - 9.893: 96.4806% ( 10) 00:09:47.165 9.893 - 9.949: 96.6151% ( 12) 00:09:47.165 9.949 - 10.005: 96.7496% ( 12) 00:09:47.165 10.005 - 10.061: 96.8169% ( 6) 00:09:47.165 10.061 - 10.117: 96.8505% ( 3) 00:09:47.165 10.117 - 10.173: 96.9177% ( 6) 00:09:47.165 10.173 - 10.229: 96.9626% ( 4) 00:09:47.165 10.229 - 10.285: 97.0074% ( 4) 00:09:47.165 10.285 - 10.341: 97.0186% ( 1) 00:09:47.165 10.341 - 10.397: 97.0410% ( 2) 00:09:47.165 10.397 - 10.452: 97.0634% ( 2) 00:09:47.165 10.452 - 10.508: 97.0971% ( 3) 00:09:47.165 10.508 - 10.564: 97.1195% ( 2) 00:09:47.165 10.564 - 10.620: 97.1419% ( 2) 00:09:47.165 10.620 - 10.676: 97.1531% ( 1) 00:09:47.165 10.676 - 10.732: 97.1755% ( 2) 00:09:47.165 10.732 - 10.788: 97.1867% ( 1) 00:09:47.165 10.900 - 10.955: 97.2428% ( 5) 00:09:47.165 10.955 - 11.011: 97.2540% ( 1) 
00:09:47.165 11.067 - 11.123: 97.2764% ( 2) 00:09:47.165 11.123 - 11.179: 97.2876% ( 1) 00:09:47.165 11.179 - 11.235: 97.2988% ( 1) 00:09:47.165 11.235 - 11.291: 97.3100% ( 1) 00:09:47.165 11.347 - 11.403: 97.3324% ( 2) 00:09:47.165 11.403 - 11.459: 97.3436% ( 1) 00:09:47.165 11.459 - 11.514: 97.3549% ( 1) 00:09:47.165 11.514 - 11.570: 97.3661% ( 1) 00:09:47.165 11.626 - 11.682: 97.3885% ( 2) 00:09:47.165 11.682 - 11.738: 97.4333% ( 4) 00:09:47.165 11.850 - 11.906: 97.4781% ( 4) 00:09:47.165 11.906 - 11.962: 97.5566% ( 7) 00:09:47.165 12.017 - 12.073: 97.5790% ( 2) 00:09:47.165 12.129 - 12.185: 97.5902% ( 1) 00:09:47.165 12.185 - 12.241: 97.6014% ( 1) 00:09:47.165 12.241 - 12.297: 97.6239% ( 2) 00:09:47.165 12.297 - 12.353: 97.6911% ( 6) 00:09:47.165 12.353 - 12.409: 97.7135% ( 2) 00:09:47.165 12.409 - 12.465: 97.7359% ( 2) 00:09:47.165 12.465 - 12.521: 97.7471% ( 1) 00:09:47.165 12.521 - 12.576: 97.7920% ( 4) 00:09:47.165 12.576 - 12.632: 97.8144% ( 2) 00:09:47.165 12.632 - 12.688: 97.8480% ( 3) 00:09:47.165 12.688 - 12.744: 97.8816% ( 3) 00:09:47.165 12.744 - 12.800: 97.9153% ( 3) 00:09:47.165 12.800 - 12.856: 97.9825% ( 6) 00:09:47.165 12.856 - 12.912: 98.0049% ( 2) 00:09:47.165 12.912 - 12.968: 98.0386% ( 3) 00:09:47.165 12.968 - 13.024: 98.0834% ( 4) 00:09:47.165 13.024 - 13.079: 98.1170% ( 3) 00:09:47.165 13.079 - 13.135: 98.1394% ( 2) 00:09:47.165 13.135 - 13.191: 98.1506% ( 1) 00:09:47.165 13.191 - 13.247: 98.1618% ( 1) 00:09:47.165 13.247 - 13.303: 98.1955% ( 3) 00:09:47.165 13.415 - 13.471: 98.2179% ( 2) 00:09:47.165 13.471 - 13.527: 98.2291% ( 1) 00:09:47.165 13.527 - 13.583: 98.2627% ( 3) 00:09:47.165 13.583 - 13.638: 98.2851% ( 2) 00:09:47.165 13.638 - 13.694: 98.2963% ( 1) 00:09:47.165 13.750 - 13.806: 98.3300% ( 3) 00:09:47.165 13.806 - 13.862: 98.3524% ( 2) 00:09:47.165 13.918 - 13.974: 98.3636% ( 1) 00:09:47.165 13.974 - 14.030: 98.3748% ( 1) 00:09:47.165 14.086 - 14.141: 98.3860% ( 1) 00:09:47.165 14.197 - 14.253: 98.3972% ( 1) 00:09:47.165 14.253 - 14.309: 98.4196% ( 2) 00:09:47.165 14.309 - 14.421: 98.4308% ( 1) 00:09:47.165 14.421 - 14.533: 98.4421% ( 1) 00:09:47.165 14.533 - 14.645: 98.4757% ( 3) 00:09:47.165 14.645 - 14.756: 98.4869% ( 1) 00:09:47.165 14.756 - 14.868: 98.5093% ( 2) 00:09:47.165 14.868 - 14.980: 98.5205% ( 1) 00:09:47.165 14.980 - 15.092: 98.5429% ( 2) 00:09:47.165 15.092 - 15.203: 98.5653% ( 2) 00:09:47.165 15.315 - 15.427: 98.5878% ( 2) 00:09:47.165 15.427 - 15.539: 98.6214% ( 3) 00:09:47.165 15.651 - 15.762: 98.6438% ( 2) 00:09:47.165 16.769 - 16.880: 98.6774% ( 3) 00:09:47.165 16.992 - 17.104: 98.6886% ( 1) 00:09:47.165 17.104 - 17.216: 98.7111% ( 2) 00:09:47.165 17.216 - 17.328: 98.7559% ( 4) 00:09:47.165 17.328 - 17.439: 98.8119% ( 5) 00:09:47.165 17.439 - 17.551: 98.8792% ( 6) 00:09:47.165 17.551 - 17.663: 98.9352% ( 5) 00:09:47.165 17.663 - 17.775: 98.9800% ( 4) 00:09:47.165 17.775 - 17.886: 99.0137% ( 3) 00:09:47.165 17.886 - 17.998: 99.0585% ( 4) 00:09:47.165 17.998 - 18.110: 99.0809% ( 2) 00:09:47.165 18.110 - 18.222: 99.1370% ( 5) 00:09:47.165 18.222 - 18.334: 99.1706% ( 3) 00:09:47.165 18.334 - 18.445: 99.2266% ( 5) 00:09:47.165 18.445 - 18.557: 99.2603% ( 3) 00:09:47.165 18.557 - 18.669: 99.2715% ( 1) 00:09:47.165 18.669 - 18.781: 99.3163% ( 4) 00:09:47.165 18.781 - 18.893: 99.3499% ( 3) 00:09:47.165 18.893 - 19.004: 99.3723% ( 2) 00:09:47.165 19.004 - 19.116: 99.4508% ( 7) 00:09:47.165 19.116 - 19.228: 99.4732% ( 2) 00:09:47.165 19.228 - 19.340: 99.4956% ( 2) 00:09:47.165 19.340 - 19.452: 99.5068% ( 1) 00:09:47.165 19.452 - 19.563: 
99.5293% ( 2) 00:09:47.165 19.563 - 19.675: 99.5517% ( 2) 00:09:47.165 20.122 - 20.234: 99.5741% ( 2) 00:09:47.165 20.234 - 20.346: 99.5853% ( 1) 00:09:47.165 20.458 - 20.569: 99.5965% ( 1) 00:09:47.165 20.681 - 20.793: 99.6077% ( 1) 00:09:47.165 20.793 - 20.905: 99.6189% ( 1) 00:09:47.165 20.905 - 21.017: 99.6301% ( 1) 00:09:47.165 21.017 - 21.128: 99.6413% ( 1) 00:09:47.165 21.352 - 21.464: 99.6525% ( 1) 00:09:47.165 21.799 - 21.911: 99.6638% ( 1) 00:09:47.165 22.023 - 22.134: 99.6750% ( 1) 00:09:47.165 22.470 - 22.582: 99.6974% ( 2) 00:09:47.165 22.582 - 22.693: 99.7086% ( 1) 00:09:47.165 22.805 - 22.917: 99.7198% ( 1) 00:09:47.165 22.917 - 23.029: 99.7310% ( 1) 00:09:47.165 23.141 - 23.252: 99.7422% ( 1) 00:09:47.165 23.364 - 23.476: 99.7534% ( 1) 00:09:47.165 23.476 - 23.588: 99.7758% ( 2) 00:09:47.165 23.700 - 23.811: 99.7870% ( 1) 00:09:47.165 23.811 - 23.923: 99.8095% ( 2) 00:09:47.165 24.035 - 24.147: 99.8543% ( 4) 00:09:47.165 24.147 - 24.259: 99.8655% ( 1) 00:09:47.165 24.370 - 24.482: 99.8767% ( 1) 00:09:47.165 24.929 - 25.041: 99.8879% ( 1) 00:09:47.165 25.712 - 25.824: 99.9103% ( 2) 00:09:47.165 26.047 - 26.159: 99.9215% ( 1) 00:09:47.165 26.830 - 26.941: 99.9328% ( 1) 00:09:47.165 29.513 - 29.736: 99.9440% ( 1) 00:09:47.165 29.960 - 30.183: 99.9552% ( 1) 00:09:47.165 33.984 - 34.208: 99.9664% ( 1) 00:09:47.165 35.549 - 35.773: 99.9776% ( 1) 00:09:47.165 43.151 - 43.375: 99.9888% ( 1) 00:09:47.165 176.182 - 177.076: 100.0000% ( 1) 00:09:47.165 00:09:47.165 00:09:47.165 real 0m1.225s 00:09:47.165 user 0m1.072s 00:09:47.165 sys 0m0.108s 00:09:47.165 08:31:08 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.165 08:31:08 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:47.165 ************************************ 00:09:47.165 END TEST nvme_overhead 00:09:47.165 ************************************ 00:09:47.165 08:31:08 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:47.165 08:31:08 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:47.165 08:31:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.165 08:31:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:47.165 ************************************ 00:09:47.165 START TEST nvme_arbitration 00:09:47.165 ************************************ 00:09:47.165 08:31:08 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:50.462 Initializing NVMe Controllers 00:09:50.462 Attached to 0000:00:10.0 00:09:50.462 Attached to 0000:00:11.0 00:09:50.462 Attached to 0000:00:13.0 00:09:50.462 Attached to 0000:00:12.0 00:09:50.462 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:50.462 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:50.462 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:50.462 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:50.462 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:50.462 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:50.462 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:50.462 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:50.462 Initialization complete. Launching workers. 
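The nvme_arbitration run just launched exercises weighted-round-robin priorities; the per-core lines that follow each report an "urgent priority queue". Roughly, a priority queue is requested by filling the default I/O qpair options and setting qprio before allocation, as in this sketch; it assumes weighted round robin was enabled when the controller was attached and is not the arbitration example's source.

    /*
     * Sketch: allocate an I/O qpair with an explicit arbitration priority.
     * Weighted round robin must be enabled on the controller for qprio to matter.
     */
    #include "spdk/nvme.h"

    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.qprio = SPDK_NVME_QPRIO_URGENT;   /* urgent / high / medium / low */

        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }
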
00:09:50.462 Starting thread on core 1 with urgent priority queue 00:09:50.462 Starting thread on core 2 with urgent priority queue 00:09:50.462 Starting thread on core 3 with urgent priority queue 00:09:50.462 Starting thread on core 0 with urgent priority queue 00:09:50.462 QEMU NVMe Ctrl (12340 ) core 0: 3797.33 IO/s 26.33 secs/100000 ios 00:09:50.462 QEMU NVMe Ctrl (12342 ) core 0: 3797.33 IO/s 26.33 secs/100000 ios 00:09:50.462 QEMU NVMe Ctrl (12341 ) core 1: 3989.33 IO/s 25.07 secs/100000 ios 00:09:50.462 QEMU NVMe Ctrl (12342 ) core 1: 3989.33 IO/s 25.07 secs/100000 ios 00:09:50.462 QEMU NVMe Ctrl (12343 ) core 2: 4458.67 IO/s 22.43 secs/100000 ios 00:09:50.462 QEMU NVMe Ctrl (12342 ) core 3: 3989.33 IO/s 25.07 secs/100000 ios 00:09:50.462 ======================================================== 00:09:50.462 00:09:50.462 00:09:50.462 real 0m3.285s 00:09:50.462 user 0m9.051s 00:09:50.462 sys 0m0.144s 00:09:50.462 08:31:12 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.462 08:31:12 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:50.462 ************************************ 00:09:50.462 END TEST nvme_arbitration 00:09:50.462 ************************************ 00:09:50.462 08:31:12 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:50.462 08:31:12 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:50.462 08:31:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.462 08:31:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:50.462 ************************************ 00:09:50.462 START TEST nvme_single_aen 00:09:50.462 ************************************ 00:09:50.462 08:31:12 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:50.721 Asynchronous Event Request test 00:09:50.721 Attached to 0000:00:10.0 00:09:50.721 Attached to 0000:00:11.0 00:09:50.721 Attached to 0000:00:13.0 00:09:50.721 Attached to 0000:00:12.0 00:09:50.721 Reset controller to setup AER completions for this process 00:09:50.721 Registering asynchronous event callbacks... 
00:09:50.721 Getting orig temperature thresholds of all controllers 00:09:50.721 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:50.721 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:50.721 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:50.721 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:50.721 Setting all controllers temperature threshold low to trigger AER 00:09:50.721 Waiting for all controllers temperature threshold to be set lower 00:09:50.721 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:50.721 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:50.721 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:50.721 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:50.721 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:50.721 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:50.721 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:50.721 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:50.721 Waiting for all controllers to trigger AER and reset threshold 00:09:50.721 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:50.721 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:50.721 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:50.721 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:50.721 Cleaning up... 00:09:50.721 00:09:50.721 real 0m0.239s 00:09:50.721 user 0m0.081s 00:09:50.721 sys 0m0.112s 00:09:50.721 08:31:12 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.721 08:31:12 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:50.721 ************************************ 00:09:50.721 END TEST nvme_single_aen 00:09:50.721 ************************************ 00:09:50.721 08:31:12 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:50.721 08:31:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.721 08:31:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.721 08:31:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:50.721 ************************************ 00:09:50.721 START TEST nvme_doorbell_aers 00:09:50.721 ************************************ 00:09:50.721 08:31:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:09:50.721 08:31:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:50.721 08:31:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:50.721 08:31:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:50.721 08:31:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:50.721 08:31:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:50.721 08:31:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:09:50.721 08:31:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:50.721 08:31:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:50.721 08:31:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
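The aer output above lowers each controller's temperature threshold below the reported 323 Kelvin so the device posts an asynchronous event. Below is a minimal sketch of that Set Features call, assuming an attached controller and an illustrative 200 Kelvin threshold; the callback and polling loop are not the test's actual code.

    /*
     * Sketch: lower the composite temperature threshold below the current
     * temperature so the controller raises an AER. Values are illustrative.
     */
    #include <stdbool.h>
    #include "spdk/nvme.h"

    static void
    set_feat_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        bool *done = cb_arg;

        *done = true;
    }

    static int
    lower_temp_threshold(struct spdk_nvme_ctrlr *ctrlr)
    {
        bool done = false;
        uint32_t cdw11 = 200;   /* Kelvin, well under the 323 K reported above */
        int rc;

        rc = spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                                             SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                             cdw11, 0, NULL, 0,
                                             set_feat_done, &done);
        if (rc != 0) {
            return rc;
        }
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }
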
00:09:50.979 08:31:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:50.979 08:31:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:50.979 08:31:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:50.979 08:31:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:51.237 [2024-11-19 08:31:12.928914] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:01.239 Executing: test_write_invalid_db 00:10:01.239 Waiting for AER completion... 00:10:01.239 Failure: test_write_invalid_db 00:10:01.239 00:10:01.239 Executing: test_invalid_db_write_overflow_sq 00:10:01.239 Waiting for AER completion... 00:10:01.239 Failure: test_invalid_db_write_overflow_sq 00:10:01.239 00:10:01.239 Executing: test_invalid_db_write_overflow_cq 00:10:01.239 Waiting for AER completion... 00:10:01.239 Failure: test_invalid_db_write_overflow_cq 00:10:01.239 00:10:01.239 08:31:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:01.239 08:31:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:01.239 [2024-11-19 08:31:22.947926] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:11.215 Executing: test_write_invalid_db 00:10:11.215 Waiting for AER completion... 00:10:11.215 Failure: test_write_invalid_db 00:10:11.215 00:10:11.215 Executing: test_invalid_db_write_overflow_sq 00:10:11.215 Waiting for AER completion... 00:10:11.215 Failure: test_invalid_db_write_overflow_sq 00:10:11.215 00:10:11.215 Executing: test_invalid_db_write_overflow_cq 00:10:11.215 Waiting for AER completion... 00:10:11.215 Failure: test_invalid_db_write_overflow_cq 00:10:11.215 00:10:11.215 08:31:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:11.215 08:31:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:11.215 [2024-11-19 08:31:32.989367] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:21.188 Executing: test_write_invalid_db 00:10:21.188 Waiting for AER completion... 00:10:21.188 Failure: test_write_invalid_db 00:10:21.188 00:10:21.189 Executing: test_invalid_db_write_overflow_sq 00:10:21.189 Waiting for AER completion... 00:10:21.189 Failure: test_invalid_db_write_overflow_sq 00:10:21.189 00:10:21.189 Executing: test_invalid_db_write_overflow_cq 00:10:21.189 Waiting for AER completion... 
00:10:21.189 Failure: test_invalid_db_write_overflow_cq 00:10:21.189 00:10:21.189 08:31:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:21.189 08:31:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:21.189 [2024-11-19 08:31:43.036976] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:31.173 Executing: test_write_invalid_db 00:10:31.173 Waiting for AER completion... 00:10:31.173 Failure: test_write_invalid_db 00:10:31.173 00:10:31.173 Executing: test_invalid_db_write_overflow_sq 00:10:31.173 Waiting for AER completion... 00:10:31.173 Failure: test_invalid_db_write_overflow_sq 00:10:31.173 00:10:31.173 Executing: test_invalid_db_write_overflow_cq 00:10:31.173 Waiting for AER completion... 00:10:31.173 Failure: test_invalid_db_write_overflow_cq 00:10:31.173 00:10:31.173 00:10:31.173 real 0m40.285s 00:10:31.173 user 0m33.264s 00:10:31.173 sys 0m6.705s 00:10:31.173 08:31:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.173 08:31:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:31.173 ************************************ 00:10:31.173 END TEST nvme_doorbell_aers 00:10:31.173 ************************************ 00:10:31.173 08:31:52 nvme -- nvme/nvme.sh@97 -- # uname 00:10:31.173 08:31:52 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:31.173 08:31:52 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:31.173 08:31:52 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:31.173 08:31:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.173 08:31:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:31.173 ************************************ 00:10:31.173 START TEST nvme_multi_aen 00:10:31.173 ************************************ 00:10:31.173 08:31:52 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:31.433 [2024-11-19 08:31:53.094650] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:31.433 [2024-11-19 08:31:53.094733] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:31.433 [2024-11-19 08:31:53.094769] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:31.433 [2024-11-19 08:31:53.096436] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:31.433 [2024-11-19 08:31:53.096491] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:31.433 [2024-11-19 08:31:53.096508] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:31.433 [2024-11-19 08:31:53.097650] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. 
Dropping the request. 00:10:31.433 [2024-11-19 08:31:53.097697] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:31.433 [2024-11-19 08:31:53.097714] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:31.433 [2024-11-19 08:31:53.098813] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:31.433 [2024-11-19 08:31:53.098857] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:31.434 [2024-11-19 08:31:53.098875] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75968) is not found. Dropping the request. 00:10:31.434 Child process pid: 76479 00:10:31.434 [Child] Asynchronous Event Request test 00:10:31.434 [Child] Attached to 0000:00:10.0 00:10:31.434 [Child] Attached to 0000:00:11.0 00:10:31.434 [Child] Attached to 0000:00:13.0 00:10:31.434 [Child] Attached to 0000:00:12.0 00:10:31.434 [Child] Registering asynchronous event callbacks... 00:10:31.434 [Child] Getting orig temperature thresholds of all controllers 00:10:31.434 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.434 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.434 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.434 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.434 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:31.434 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.434 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.434 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.434 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.434 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.434 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.434 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.434 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.434 [Child] Cleaning up... 00:10:31.721 Asynchronous Event Request test 00:10:31.721 Attached to 0000:00:10.0 00:10:31.721 Attached to 0000:00:11.0 00:10:31.721 Attached to 0000:00:13.0 00:10:31.721 Attached to 0000:00:12.0 00:10:31.721 Reset controller to setup AER completions for this process 00:10:31.721 Registering asynchronous event callbacks... 
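Both the [Child] and parent halves of nvme_multi_aen print "Registering asynchronous event callbacks..."; in driver terms that is one callback registered per controller and invoked from the admin completion path. A hedged sketch follows; the callback body, logging, and the endless poll loop are assumptions rather than the test's code.

    /* Sketch: register an AER callback and service it by polling admin completions. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "AER completed in error\n");
            return;
        }
        /* cdw0 carries the async event type/info; log page 2 (SMART) holds details. */
        printf("aer_cb: cdw0 0x%08x\n", cpl->cdw0);
    }

    static void
    watch_for_aers(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

        for (;;) {   /* a real program would have an exit condition */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }
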
00:10:31.721 Getting orig temperature thresholds of all controllers 00:10:31.721 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.721 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.721 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.721 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.721 Setting all controllers temperature threshold low to trigger AER 00:10:31.721 Waiting for all controllers temperature threshold to be set lower 00:10:31.721 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.721 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:31.721 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.721 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:31.721 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.721 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:31.721 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.721 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:31.721 Waiting for all controllers to trigger AER and reset threshold 00:10:31.721 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.721 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.721 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.721 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.721 Cleaning up... 00:10:31.721 00:10:31.721 real 0m0.472s 00:10:31.721 user 0m0.163s 00:10:31.721 sys 0m0.201s 00:10:31.721 08:31:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.721 08:31:53 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:31.721 ************************************ 00:10:31.721 END TEST nvme_multi_aen 00:10:31.721 ************************************ 00:10:31.721 08:31:53 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:31.721 08:31:53 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:31.721 08:31:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.721 08:31:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:31.721 ************************************ 00:10:31.721 START TEST nvme_startup 00:10:31.721 ************************************ 00:10:31.721 08:31:53 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:31.980 Initializing NVMe Controllers 00:10:31.980 Attached to 0000:00:10.0 00:10:31.980 Attached to 0000:00:11.0 00:10:31.980 Attached to 0000:00:13.0 00:10:31.980 Attached to 0000:00:12.0 00:10:31.980 Initialization complete. 00:10:31.980 Time used:146795.000 (us). 
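Every test in this log opens with the same attach sequence ("Attached to 0000:00:10.0" and so on), and nvme_startup times just that path (the "Time used" line above). The usual shape of a PCIe probe/attach in an SPDK program is sketched below; the program name is made up and error handling is trimmed.

    /* Sketch: initialize the SPDK env and attach every local PCIe NVMe controller. */
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true;   /* attach to everything the probe finds */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "startup_sketch";   /* illustrative name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* NULL trid means "enumerate local PCIe controllers". */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }
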
00:10:31.980 00:10:31.980 real 0m0.228s 00:10:31.980 user 0m0.082s 00:10:31.980 sys 0m0.106s 00:10:31.980 08:31:53 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.980 08:31:53 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:31.980 ************************************ 00:10:31.980 END TEST nvme_startup 00:10:31.980 ************************************ 00:10:31.980 08:31:53 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:31.980 08:31:53 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:31.980 08:31:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.980 08:31:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:31.980 ************************************ 00:10:31.980 START TEST nvme_multi_secondary 00:10:31.980 ************************************ 00:10:31.980 08:31:53 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:10:31.980 08:31:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=76535 00:10:31.980 08:31:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:31.980 08:31:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=76536 00:10:31.980 08:31:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:31.980 08:31:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:35.357 Initializing NVMe Controllers 00:10:35.357 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:35.358 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:35.358 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:35.358 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:35.358 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:35.358 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:35.358 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:35.358 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:35.358 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:35.358 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:35.358 Initialization complete. Launching workers. 
00:10:35.358 ======================================================== 00:10:35.358 Latency(us) 00:10:35.358 Device Information : IOPS MiB/s Average min max 00:10:35.358 PCIE (0000:00:10.0) NSID 1 from core 1: 5662.72 22.12 2822.94 898.56 6775.64 00:10:35.358 PCIE (0000:00:11.0) NSID 1 from core 1: 5662.72 22.12 2824.90 918.48 6865.09 00:10:35.358 PCIE (0000:00:13.0) NSID 1 from core 1: 5662.72 22.12 2824.74 926.22 6667.71 00:10:35.358 PCIE (0000:00:12.0) NSID 1 from core 1: 5662.72 22.12 2824.56 908.89 6076.52 00:10:35.358 PCIE (0000:00:12.0) NSID 2 from core 1: 5662.72 22.12 2824.61 922.34 6585.60 00:10:35.358 PCIE (0000:00:12.0) NSID 3 from core 1: 5662.72 22.12 2824.89 920.79 6597.09 00:10:35.358 ======================================================== 00:10:35.358 Total : 33976.30 132.72 2824.44 898.56 6865.09 00:10:35.358 00:10:35.358 Initializing NVMe Controllers 00:10:35.358 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:35.358 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:35.358 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:35.358 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:35.358 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:35.358 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:35.358 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:35.358 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:35.358 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:35.358 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:35.358 Initialization complete. Launching workers. 00:10:35.358 ======================================================== 00:10:35.358 Latency(us) 00:10:35.358 Device Information : IOPS MiB/s Average min max 00:10:35.358 PCIE (0000:00:10.0) NSID 1 from core 2: 3237.19 12.65 4936.71 1428.36 12625.72 00:10:35.358 PCIE (0000:00:11.0) NSID 1 from core 2: 3231.86 12.62 4943.08 1315.52 16579.07 00:10:35.358 PCIE (0000:00:13.0) NSID 1 from core 2: 3231.86 12.62 4943.03 1384.95 13315.19 00:10:35.358 PCIE (0000:00:12.0) NSID 1 from core 2: 3231.86 12.62 4943.37 1256.23 13200.53 00:10:35.358 PCIE (0000:00:12.0) NSID 2 from core 2: 3231.86 12.62 4942.60 1412.50 13208.47 00:10:35.358 PCIE (0000:00:12.0) NSID 3 from core 2: 3231.86 12.62 4942.94 1330.89 12775.22 00:10:35.358 ======================================================== 00:10:35.358 Total : 19396.50 75.77 4941.95 1256.23 16579.07 00:10:35.358 00:10:35.617 08:31:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 76535 00:10:37.521 Initializing NVMe Controllers 00:10:37.521 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:37.521 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:37.521 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:37.521 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:37.521 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:37.521 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:37.521 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:37.521 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:37.521 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:37.521 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:37.521 Initialization complete. Launching workers. 
00:10:37.521 ======================================================== 00:10:37.521 Latency(us) 00:10:37.521 Device Information : IOPS MiB/s Average min max 00:10:37.521 PCIE (0000:00:10.0) NSID 1 from core 0: 9157.07 35.77 1745.60 804.24 7143.93 00:10:37.521 PCIE (0000:00:11.0) NSID 1 from core 0: 9157.07 35.77 1746.72 827.52 7515.65 00:10:37.521 PCIE (0000:00:13.0) NSID 1 from core 0: 9157.07 35.77 1746.69 730.29 7417.68 00:10:37.521 PCIE (0000:00:12.0) NSID 1 from core 0: 9157.07 35.77 1746.65 627.75 7148.75 00:10:37.521 PCIE (0000:00:12.0) NSID 2 from core 0: 9157.07 35.77 1746.60 528.22 6871.38 00:10:37.522 PCIE (0000:00:12.0) NSID 3 from core 0: 9157.07 35.77 1746.56 434.15 6997.82 00:10:37.522 ======================================================== 00:10:37.522 Total : 54942.43 214.62 1746.47 434.15 7515.65 00:10:37.522 00:10:37.522 08:31:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 76536 00:10:37.522 08:31:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=76605 00:10:37.522 08:31:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:37.522 08:31:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=76606 00:10:37.522 08:31:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:37.522 08:31:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:40.881 Initializing NVMe Controllers 00:10:40.881 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:40.881 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:40.881 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:40.881 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:40.881 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:40.881 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:40.881 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:40.881 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:40.881 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:40.881 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:40.881 Initialization complete. Launching workers. 
00:10:40.881 ======================================================== 00:10:40.881 Latency(us) 00:10:40.881 Device Information : IOPS MiB/s Average min max 00:10:40.881 PCIE (0000:00:10.0) NSID 1 from core 0: 5954.88 23.26 2684.53 1111.67 7255.45 00:10:40.881 PCIE (0000:00:11.0) NSID 1 from core 0: 5954.88 23.26 2686.34 1160.67 7124.59 00:10:40.881 PCIE (0000:00:13.0) NSID 1 from core 0: 5954.88 23.26 2686.33 1148.94 7218.99 00:10:40.881 PCIE (0000:00:12.0) NSID 1 from core 0: 5954.88 23.26 2686.27 1140.56 7353.18 00:10:40.881 PCIE (0000:00:12.0) NSID 2 from core 0: 5954.88 23.26 2686.31 1117.33 6841.98 00:10:40.881 PCIE (0000:00:12.0) NSID 3 from core 0: 5954.88 23.26 2686.29 1131.50 6891.98 00:10:40.881 ======================================================== 00:10:40.881 Total : 35729.29 139.57 2686.01 1111.67 7353.18 00:10:40.881 00:10:40.881 Initializing NVMe Controllers 00:10:40.881 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:40.881 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:40.881 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:40.881 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:40.881 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:40.881 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:40.881 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:40.881 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:40.881 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:40.881 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:40.881 Initialization complete. Launching workers. 00:10:40.881 ======================================================== 00:10:40.881 Latency(us) 00:10:40.881 Device Information : IOPS MiB/s Average min max 00:10:40.881 PCIE (0000:00:10.0) NSID 1 from core 1: 5427.87 21.20 2945.23 864.63 7773.26 00:10:40.881 PCIE (0000:00:11.0) NSID 1 from core 1: 5427.87 21.20 2947.00 878.59 7704.90 00:10:40.881 PCIE (0000:00:13.0) NSID 1 from core 1: 5427.87 21.20 2946.95 867.68 7643.49 00:10:40.881 PCIE (0000:00:12.0) NSID 1 from core 1: 5427.87 21.20 2946.92 890.16 8579.09 00:10:40.881 PCIE (0000:00:12.0) NSID 2 from core 1: 5427.87 21.20 2946.86 888.77 7632.93 00:10:40.881 PCIE (0000:00:12.0) NSID 3 from core 1: 5427.87 21.20 2946.82 894.80 7737.36 00:10:40.881 ======================================================== 00:10:40.881 Total : 32567.24 127.22 2946.63 864.63 8579.09 00:10:40.881 00:10:42.835 Initializing NVMe Controllers 00:10:42.835 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:42.835 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:42.835 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:42.835 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:42.835 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:42.835 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:42.835 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:42.835 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:42.835 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:42.835 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:42.835 Initialization complete. Launching workers. 
00:10:42.835 ======================================================== 00:10:42.835 Latency(us) 00:10:42.835 Device Information : IOPS MiB/s Average min max 00:10:42.835 PCIE (0000:00:10.0) NSID 1 from core 2: 3314.81 12.95 4824.68 970.15 17800.32 00:10:42.835 PCIE (0000:00:11.0) NSID 1 from core 2: 3314.81 12.95 4826.21 967.80 22047.22 00:10:42.835 PCIE (0000:00:13.0) NSID 1 from core 2: 3314.81 12.95 4826.44 984.16 18082.38 00:10:42.835 PCIE (0000:00:12.0) NSID 1 from core 2: 3314.81 12.95 4826.14 985.09 16984.80 00:10:42.835 PCIE (0000:00:12.0) NSID 2 from core 2: 3314.81 12.95 4826.05 961.90 20382.35 00:10:42.835 PCIE (0000:00:12.0) NSID 3 from core 2: 3314.81 12.95 4825.74 748.87 17273.50 00:10:42.835 ======================================================== 00:10:42.835 Total : 19888.83 77.69 4825.88 748.87 22047.22 00:10:42.835 00:10:42.835 08:32:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 76605 00:10:42.835 08:32:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 76606 00:10:42.835 00:10:42.835 real 0m10.618s 00:10:42.835 user 0m18.348s 00:10:42.835 sys 0m0.808s 00:10:42.835 08:32:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.835 08:32:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:42.835 ************************************ 00:10:42.835 END TEST nvme_multi_secondary 00:10:42.835 ************************************ 00:10:42.835 08:32:04 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:42.835 08:32:04 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:42.835 08:32:04 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/75561 ]] 00:10:42.835 08:32:04 nvme -- common/autotest_common.sh@1094 -- # kill 75561 00:10:42.835 08:32:04 nvme -- common/autotest_common.sh@1095 -- # wait 75561 00:10:42.835 [2024-11-19 08:32:04.413221] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.413347] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.413394] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.413435] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.414486] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.414567] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.414607] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.414651] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.415599] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 
00:10:42.835 [2024-11-19 08:32:04.415681] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.415753] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.415827] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.417040] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.417216] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.417303] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.417395] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76478) is not found. Dropping the request. 00:10:42.835 [2024-11-19 08:32:04.501468] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:10:42.835 08:32:04 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:10:42.835 08:32:04 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:10:42.835 08:32:04 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:42.835 08:32:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:42.835 08:32:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.835 08:32:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:42.835 ************************************ 00:10:42.835 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:42.835 ************************************ 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:42.835 * Looking for test storage... 
00:10:42.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:42.835 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.094 --rc genhtml_branch_coverage=1 00:10:43.094 --rc genhtml_function_coverage=1 00:10:43.094 --rc genhtml_legend=1 00:10:43.094 --rc geninfo_all_blocks=1 00:10:43.094 --rc geninfo_unexecuted_blocks=1 00:10:43.094 00:10:43.094 ' 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.094 --rc genhtml_branch_coverage=1 00:10:43.094 --rc genhtml_function_coverage=1 00:10:43.094 --rc genhtml_legend=1 00:10:43.094 --rc geninfo_all_blocks=1 00:10:43.094 --rc geninfo_unexecuted_blocks=1 00:10:43.094 00:10:43.094 ' 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.094 --rc genhtml_branch_coverage=1 00:10:43.094 --rc genhtml_function_coverage=1 00:10:43.094 --rc genhtml_legend=1 00:10:43.094 --rc geninfo_all_blocks=1 00:10:43.094 --rc geninfo_unexecuted_blocks=1 00:10:43.094 00:10:43.094 ' 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.094 --rc genhtml_branch_coverage=1 00:10:43.094 --rc genhtml_function_coverage=1 00:10:43.094 --rc genhtml_legend=1 00:10:43.094 --rc geninfo_all_blocks=1 00:10:43.094 --rc geninfo_unexecuted_blocks=1 00:10:43.094 00:10:43.094 ' 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:43.094 
08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:43.094 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76770 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76770 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 76770 ']' 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
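The trace that follows drives the whole stuck-admin-command scenario over the spdk_tgt JSON-RPC socket: attach the device as bdev nvme0, arm a one-shot error injection for admin opcode 10 (Get Features) that holds the command for up to 15 seconds, issue that command asynchronously, then reset the controller and verify that the stuck command completes with the injected status (sct 0, sc 1) well inside the 5-second test_timeout. A condensed sketch of the same flow, using the rpc.py calls and values that appear verbatim below (sequencing, timing math, and status decoding are simplified, and the temporary-file name is a stand-in for the mktemp file the test creates; the authoritative version is test/nvme/nvme_reset_stuck_adm_cmd.sh):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
"$RPC" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
# Get Features (number of queues) is submitted asynchronously; with the injection armed it
# stays pending until the reset below completes it with the injected status.
"$RPC" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 Get Features command, as captured in this log> > /tmp/err_inj.txt & get_feat_pid=$!
"$RPC" bdev_nvme_reset_controller nvme0
wait "$get_feat_pid"
"$RPC" bdev_nvme_detach_controller nvme0
jq -r .cpl /tmp/err_inj.txt   # completion dwords; the test decodes sc/sct from this blob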
00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.095 08:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:43.095 [2024-11-19 08:32:04.963759] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:10:43.095 [2024-11-19 08:32:04.963927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76770 ] 00:10:43.354 [2024-11-19 08:32:05.116077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.354 [2024-11-19 08:32:05.144907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.354 [2024-11-19 08:32:05.145173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.354 [2024-11-19 08:32:05.145122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.354 [2024-11-19 08:32:05.145289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:44.290 nvme0n1 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_QUnCj.txt 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:44.290 true 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732005125 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76793 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:44.290 08:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:46.198 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:46.198 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.198 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:46.198 [2024-11-19 08:32:07.921883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:46.198 [2024-11-19 08:32:07.922168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:46.198 [2024-11-19 08:32:07.922213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:46.198 [2024-11-19 08:32:07.922230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.198 [2024-11-19 08:32:07.923961] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:46.198 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.199 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76793 00:10:46.199 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76793 00:10:46.199 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76793 00:10:46.199 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:46.199 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:46.199 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:46.199 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.199 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:46.199 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.199 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:46.199 08:32:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_QUnCj.txt 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_QUnCj.txt 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76770 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 76770 ']' 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 76770 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76770 00:10:46.199 killing process with pid 76770 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76770' 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 76770 00:10:46.199 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 76770 00:10:46.766 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:46.766 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:46.766 00:10:46.766 real 0m3.901s 00:10:46.766 user 0m13.533s 00:10:46.766 sys 0m0.634s 00:10:46.766 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:46.766 08:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:46.767 ************************************ 00:10:46.767 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:46.767 ************************************ 00:10:46.767 08:32:08 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:46.767 08:32:08 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:46.767 08:32:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:46.767 08:32:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.767 08:32:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:46.767 ************************************ 00:10:46.767 START TEST nvme_fio 00:10:46.767 ************************************ 00:10:46.767 08:32:08 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:10:46.767 08:32:08 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:46.767 08:32:08 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:46.767 08:32:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:46.767 08:32:08 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:46.767 08:32:08 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:10:46.767 08:32:08 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:46.767 08:32:08 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:46.767 08:32:08 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:46.767 08:32:08 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:46.767 08:32:08 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:46.767 08:32:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:46.767 08:32:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:46.767 08:32:08 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:46.767 08:32:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:46.767 08:32:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:47.026 08:32:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:47.026 08:32:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:47.283 08:32:09 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:47.283 08:32:09 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:47.283 08:32:09 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:47.283 08:32:09 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:47.544 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:47.544 fio-3.35 00:10:47.544 Starting 1 thread 00:10:54.107 00:10:54.107 test: (groupid=0, jobs=1): err= 0: pid=76924: Tue Nov 19 08:32:15 2024 00:10:54.108 read: IOPS=22.3k, BW=86.9MiB/s (91.2MB/s)(174MiB/2001msec) 00:10:54.108 slat (usec): min=4, max=130, avg= 5.32, stdev= 1.39 00:10:54.108 clat (usec): min=217, max=13425, avg=2871.59, stdev=413.07 00:10:54.108 lat (usec): min=222, max=13556, avg=2876.91, stdev=413.82 00:10:54.108 clat percentiles (usec): 00:10:54.108 | 1.00th=[ 2606], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2737], 00:10:54.108 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2835], 00:10:54.108 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3097], 00:10:54.108 | 99.00th=[ 4555], 99.50th=[ 5604], 99.90th=[ 8029], 99.95th=[ 9765], 00:10:54.108 | 99.99th=[13042] 00:10:54.108 bw ( KiB/s): min=86704, max=89392, per=98.50%, avg=87685.33, stdev=1483.55, samples=3 00:10:54.108 iops : min=21676, max=22348, avg=21921.33, stdev=370.89, samples=3 00:10:54.108 write: IOPS=22.1k, BW=86.3MiB/s (90.5MB/s)(173MiB/2001msec); 0 zone resets 00:10:54.108 slat (nsec): min=4661, max=64431, avg=5505.41, stdev=1367.22 00:10:54.108 clat (usec): min=201, max=13218, avg=2877.65, stdev=423.39 00:10:54.108 lat (usec): min=206, max=13243, avg=2883.15, stdev=424.11 00:10:54.108 clat percentiles (usec): 00:10:54.108 | 1.00th=[ 2606], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2737], 00:10:54.108 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:10:54.108 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3130], 00:10:54.108 | 99.00th=[ 4686], 99.50th=[ 5538], 99.90th=[ 8094], 99.95th=[10290], 00:10:54.108 | 99.99th=[12649] 00:10:54.108 bw ( KiB/s): min=86528, max=90272, per=99.35%, avg=87837.33, stdev=2110.49, samples=3 00:10:54.108 iops : min=21632, max=22566, avg=21958.67, stdev=526.47, samples=3 00:10:54.108 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:54.108 lat (msec) : 2=0.06%, 4=98.75%, 10=1.10%, 20=0.05% 00:10:54.108 cpu : usr=99.25%, sys=0.05%, ctx=13, 
majf=0, minf=625 00:10:54.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:54.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.108 issued rwts: total=44532,44227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.108 00:10:54.108 Run status group 0 (all jobs): 00:10:54.108 READ: bw=86.9MiB/s (91.2MB/s), 86.9MiB/s-86.9MiB/s (91.2MB/s-91.2MB/s), io=174MiB (182MB), run=2001-2001msec 00:10:54.108 WRITE: bw=86.3MiB/s (90.5MB/s), 86.3MiB/s-86.3MiB/s (90.5MB/s-90.5MB/s), io=173MiB (181MB), run=2001-2001msec 00:10:54.108 ----------------------------------------------------- 00:10:54.108 Suppressions used: 00:10:54.108 count bytes template 00:10:54.108 1 32 /usr/src/fio/parse.c 00:10:54.108 1 8 libtcmalloc_minimal.so 00:10:54.108 ----------------------------------------------------- 00:10:54.108 00:10:54.108 08:32:15 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:54.108 08:32:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:54.108 08:32:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:54.108 08:32:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:54.108 08:32:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:54.108 08:32:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:54.367 08:32:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:54.367 08:32:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:54.367 08:32:16 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:54.367 08:32:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:54.626 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:54.626 fio-3.35 00:10:54.626 Starting 1 thread 00:11:01.224 00:11:01.224 test: (groupid=0, jobs=1): err= 0: pid=77006: Tue Nov 19 08:32:22 2024 00:11:01.225 read: IOPS=23.0k, BW=89.8MiB/s (94.2MB/s)(180MiB/2001msec) 00:11:01.225 slat (nsec): min=4377, max=70840, avg=5266.13, stdev=1153.58 00:11:01.225 clat (usec): min=280, max=13188, avg=2776.44, stdev=356.69 00:11:01.225 lat (usec): min=285, max=13259, avg=2781.70, stdev=357.29 00:11:01.225 clat percentiles (usec): 00:11:01.225 | 1.00th=[ 2540], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2638], 00:11:01.225 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:11:01.225 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2900], 95.00th=[ 3294], 00:11:01.225 | 99.00th=[ 3556], 99.50th=[ 4080], 99.90th=[ 7046], 99.95th=[ 9503], 00:11:01.225 | 99.99th=[12911] 00:11:01.225 bw ( KiB/s): min=84630, max=94192, per=98.24%, avg=90375.33, stdev=5064.36, samples=3 00:11:01.225 iops : min=21157, max=23548, avg=22593.67, stdev=1266.37, samples=3 00:11:01.225 write: IOPS=22.9k, BW=89.3MiB/s (93.6MB/s)(179MiB/2001msec); 0 zone resets 00:11:01.225 slat (nsec): min=4647, max=49298, avg=5479.68, stdev=1180.83 00:11:01.225 clat (usec): min=231, max=13024, avg=2783.94, stdev=363.69 00:11:01.225 lat (usec): min=237, max=13052, avg=2789.42, stdev=364.28 00:11:01.225 clat percentiles (usec): 00:11:01.225 | 1.00th=[ 2540], 5.00th=[ 2606], 10.00th=[ 2606], 20.00th=[ 2638], 00:11:01.225 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:11:01.225 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2900], 95.00th=[ 3294], 00:11:01.225 | 99.00th=[ 3589], 99.50th=[ 4146], 99.90th=[ 7504], 99.95th=[10159], 00:11:01.225 | 99.99th=[12518] 00:11:01.225 bw ( KiB/s): min=84510, max=93680, per=99.09%, avg=90618.00, stdev=5289.69, samples=3 00:11:01.225 iops : min=21127, max=23420, avg=22654.33, stdev=1322.71, samples=3 00:11:01.225 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:11:01.225 lat (msec) : 2=0.07%, 4=99.29%, 10=0.55%, 20=0.05% 00:11:01.225 cpu : usr=99.20%, sys=0.20%, ctx=3, majf=0, minf=625 00:11:01.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:01.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.225 issued rwts: total=46020,45749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.225 00:11:01.225 Run status group 0 (all jobs): 00:11:01.225 READ: bw=89.8MiB/s (94.2MB/s), 89.8MiB/s-89.8MiB/s (94.2MB/s-94.2MB/s), io=180MiB (188MB), run=2001-2001msec 00:11:01.225 WRITE: bw=89.3MiB/s (93.6MB/s), 89.3MiB/s-89.3MiB/s (93.6MB/s-93.6MB/s), io=179MiB (187MB), run=2001-2001msec 00:11:01.484 ----------------------------------------------------- 00:11:01.484 Suppressions used: 00:11:01.484 count bytes template 00:11:01.484 1 32 /usr/src/fio/parse.c 00:11:01.484 1 8 libtcmalloc_minimal.so 00:11:01.484 ----------------------------------------------------- 00:11:01.484 
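Each of these fio passes follows the same per-controller recipe: spdk_nvme_identify reports the namespace format (the plain 4096-byte block size used here is what nvme.sh falls back to when the 'Extended Data LBA' check does not match), and fio is then launched with SPDK's external ioengine by preloading the spdk_nvme fio plugin together with libasan, with the controller selected by PCIe traddr in the --filename argument (colons replaced by dots, as in traddr=0000.00.10.0 above). Condensed, the loop looks roughly like this, with paths taken from this trace (the canonical loop is in test/nvme/nvme.sh):

FIO=/usr/src/fio/fio
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
CONFIG=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    # ASAN build: the sanitizer runtime is preloaded ahead of the plugin, matching the trace above.
    LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" "$FIO" "$CONFIG" "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=4096
done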
00:11:01.484 08:32:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:01.484 08:32:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:01.484 08:32:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:01.484 08:32:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:01.484 08:32:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:01.484 08:32:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:01.744 08:32:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:01.744 08:32:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:01.744 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:01.744 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:01.744 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:01.744 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:01.744 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:01.744 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:01.744 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:01.744 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:01.744 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:01.744 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:01.744 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:02.003 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:02.003 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:02.003 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:02.003 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:02.003 08:32:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:02.003 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:02.003 fio-3.35 00:11:02.003 Starting 1 thread 00:11:08.566 00:11:08.566 test: (groupid=0, jobs=1): err= 0: pid=77068: Tue Nov 19 08:32:30 2024 00:11:08.566 read: IOPS=21.4k, BW=83.5MiB/s (87.6MB/s)(167MiB/2001msec) 00:11:08.566 slat (nsec): min=4580, max=61828, avg=5635.06, stdev=2140.88 00:11:08.566 clat (usec): min=231, max=12435, avg=2989.68, stdev=832.76 00:11:08.566 lat (usec): min=236, max=12440, avg=2995.32, stdev=834.14 00:11:08.566 clat percentiles (usec): 00:11:08.566 | 1.00th=[ 1811], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2737], 00:11:08.566 | 
30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:11:08.566 | 70.00th=[ 2868], 80.00th=[ 2933], 90.00th=[ 3064], 95.00th=[ 4293], 00:11:08.566 | 99.00th=[ 7832], 99.50th=[ 8356], 99.90th=[ 9896], 99.95th=[10683], 00:11:08.566 | 99.99th=[11731] 00:11:08.566 bw ( KiB/s): min=79304, max=87808, per=97.77%, avg=83645.33, stdev=4254.81, samples=3 00:11:08.566 iops : min=19826, max=21952, avg=20911.33, stdev=1063.70, samples=3 00:11:08.566 write: IOPS=21.2k, BW=82.9MiB/s (86.9MB/s)(166MiB/2001msec); 0 zone resets 00:11:08.566 slat (nsec): min=4658, max=62372, avg=5792.78, stdev=2111.50 00:11:08.566 clat (usec): min=308, max=12310, avg=2993.60, stdev=825.97 00:11:08.566 lat (usec): min=314, max=12316, avg=2999.39, stdev=827.33 00:11:08.566 clat percentiles (usec): 00:11:08.566 | 1.00th=[ 1778], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2737], 00:11:08.566 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:11:08.566 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 3097], 95.00th=[ 4293], 00:11:08.566 | 99.00th=[ 7635], 99.50th=[ 8291], 99.90th=[ 9634], 99.95th=[10552], 00:11:08.566 | 99.99th=[11469] 00:11:08.566 bw ( KiB/s): min=79192, max=87984, per=98.59%, avg=83709.33, stdev=4401.02, samples=3 00:11:08.566 iops : min=19798, max=21996, avg=20927.33, stdev=1100.26, samples=3 00:11:08.566 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:08.566 lat (msec) : 2=1.58%, 4=92.59%, 10=5.69%, 20=0.09% 00:11:08.566 cpu : usr=99.35%, sys=0.00%, ctx=3, majf=0, minf=626 00:11:08.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:08.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.566 issued rwts: total=42796,42475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.566 00:11:08.566 Run status group 0 (all jobs): 00:11:08.566 READ: bw=83.5MiB/s (87.6MB/s), 83.5MiB/s-83.5MiB/s (87.6MB/s-87.6MB/s), io=167MiB (175MB), run=2001-2001msec 00:11:08.566 WRITE: bw=82.9MiB/s (86.9MB/s), 82.9MiB/s-82.9MiB/s (86.9MB/s-86.9MB/s), io=166MiB (174MB), run=2001-2001msec 00:11:08.824 ----------------------------------------------------- 00:11:08.824 Suppressions used: 00:11:08.824 count bytes template 00:11:08.824 1 32 /usr/src/fio/parse.c 00:11:08.824 1 8 libtcmalloc_minimal.so 00:11:08.824 ----------------------------------------------------- 00:11:08.824 00:11:08.824 08:32:30 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:08.824 08:32:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:08.824 08:32:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:08.824 08:32:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:09.083 08:32:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:09.083 08:32:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:09.342 08:32:31 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:09.342 08:32:31 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:09.342 08:32:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:09.601 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:09.601 fio-3.35 00:11:09.601 Starting 1 thread 00:11:16.167 00:11:16.167 test: (groupid=0, jobs=1): err= 0: pid=77139: Tue Nov 19 08:32:37 2024 00:11:16.167 read: IOPS=19.9k, BW=77.7MiB/s (81.5MB/s)(156MiB/2001msec) 00:11:16.167 slat (nsec): min=4555, max=57423, avg=6470.68, stdev=3159.03 00:11:16.167 clat (usec): min=248, max=14327, avg=3201.95, stdev=952.59 00:11:16.167 lat (usec): min=253, max=14363, avg=3208.43, stdev=955.08 00:11:16.167 clat percentiles (usec): 00:11:16.167 | 1.00th=[ 2540], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2769], 00:11:16.167 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:11:16.167 | 70.00th=[ 3097], 80.00th=[ 3163], 90.00th=[ 3326], 95.00th=[ 5211], 00:11:16.167 | 99.00th=[ 7635], 99.50th=[ 8356], 99.90th=[ 9634], 99.95th=[11207], 00:11:16.167 | 99.99th=[14091] 00:11:16.167 bw ( KiB/s): min=76320, max=79224, per=98.01%, avg=77994.67, stdev=1502.35, samples=3 00:11:16.167 iops : min=19080, max=19806, avg=19498.67, stdev=375.59, samples=3 00:11:16.167 write: IOPS=19.8k, BW=77.5MiB/s (81.3MB/s)(155MiB/2001msec); 0 zone resets 00:11:16.167 slat (nsec): min=4675, max=56767, avg=6632.95, stdev=3165.75 00:11:16.167 clat (usec): min=227, max=14117, avg=3214.64, stdev=968.47 00:11:16.167 lat (usec): min=233, max=14135, avg=3221.27, stdev=970.97 00:11:16.167 clat percentiles (usec): 00:11:16.167 | 1.00th=[ 2573], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2769], 00:11:16.167 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:11:16.167 | 70.00th=[ 3097], 80.00th=[ 
3163], 90.00th=[ 3326], 95.00th=[ 5276], 00:11:16.167 | 99.00th=[ 7635], 99.50th=[ 8455], 99.90th=[ 9634], 99.95th=[11600], 00:11:16.167 | 99.99th=[13698] 00:11:16.167 bw ( KiB/s): min=76272, max=79600, per=98.45%, avg=78162.67, stdev=1709.69, samples=3 00:11:16.167 iops : min=19068, max=19900, avg=19540.67, stdev=427.42, samples=3 00:11:16.167 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:16.167 lat (msec) : 2=0.05%, 4=93.24%, 10=6.60%, 20=0.07% 00:11:16.167 cpu : usr=99.25%, sys=0.10%, ctx=3, majf=0, minf=624 00:11:16.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:16.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.167 issued rwts: total=39809,39717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.167 00:11:16.167 Run status group 0 (all jobs): 00:11:16.167 READ: bw=77.7MiB/s (81.5MB/s), 77.7MiB/s-77.7MiB/s (81.5MB/s-81.5MB/s), io=156MiB (163MB), run=2001-2001msec 00:11:16.167 WRITE: bw=77.5MiB/s (81.3MB/s), 77.5MiB/s-77.5MiB/s (81.3MB/s-81.3MB/s), io=155MiB (163MB), run=2001-2001msec 00:11:16.167 ----------------------------------------------------- 00:11:16.167 Suppressions used: 00:11:16.167 count bytes template 00:11:16.167 1 32 /usr/src/fio/parse.c 00:11:16.167 1 8 libtcmalloc_minimal.so 00:11:16.167 ----------------------------------------------------- 00:11:16.167 00:11:16.167 08:32:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:16.167 08:32:38 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:16.167 00:11:16.167 real 0m29.511s 00:11:16.167 user 0m17.014s 00:11:16.167 sys 0m23.518s 00:11:16.167 08:32:38 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.167 08:32:38 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:16.167 ************************************ 00:11:16.167 END TEST nvme_fio 00:11:16.167 ************************************ 00:11:16.167 ************************************ 00:11:16.167 END TEST nvme 00:11:16.167 ************************************ 00:11:16.167 00:11:16.167 real 1m40.047s 00:11:16.167 user 3m36.640s 00:11:16.167 sys 0m36.694s 00:11:16.167 08:32:38 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.167 08:32:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:16.427 08:32:38 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:11:16.427 08:32:38 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:16.427 08:32:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:16.427 08:32:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.427 08:32:38 -- common/autotest_common.sh@10 -- # set +x 00:11:16.427 ************************************ 00:11:16.427 START TEST nvme_scc 00:11:16.427 ************************************ 00:11:16.427 08:32:38 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:16.427 * Looking for test storage... 
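Note on the nvme_fio runs that finished above: fio_nvme/fio_plugin (traced from common/autotest_common.sh) resolves the ASAN runtime that the SPDK fio ioengine was linked against and preloads it ahead of the plugin before launching fio. A condensed restatement of those traced commands, with the paths as they appear in this run (the real helper also iterates over libclang_rt.asan):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; field 3 is the path
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # preload the sanitizer first so its symbols resolve before the ioengine is loaded
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096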
00:11:16.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:16.427 08:32:38 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:16.427 08:32:38 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:16.427 08:32:38 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:16.427 08:32:38 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@345 -- # : 1 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.427 08:32:38 nvme_scc -- scripts/common.sh@368 -- # return 0 00:11:16.427 08:32:38 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.427 08:32:38 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:16.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.427 --rc genhtml_branch_coverage=1 00:11:16.427 --rc genhtml_function_coverage=1 00:11:16.427 --rc genhtml_legend=1 00:11:16.427 --rc geninfo_all_blocks=1 00:11:16.427 --rc geninfo_unexecuted_blocks=1 00:11:16.427 00:11:16.427 ' 00:11:16.427 08:32:38 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:16.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.427 --rc genhtml_branch_coverage=1 00:11:16.427 --rc genhtml_function_coverage=1 00:11:16.427 --rc genhtml_legend=1 00:11:16.427 --rc geninfo_all_blocks=1 00:11:16.427 --rc geninfo_unexecuted_blocks=1 00:11:16.427 00:11:16.427 ' 00:11:16.427 08:32:38 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:16.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.427 --rc genhtml_branch_coverage=1 00:11:16.427 --rc genhtml_function_coverage=1 00:11:16.427 --rc genhtml_legend=1 00:11:16.427 --rc geninfo_all_blocks=1 00:11:16.427 --rc geninfo_unexecuted_blocks=1 00:11:16.427 00:11:16.427 ' 00:11:16.427 08:32:38 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:16.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.427 --rc genhtml_branch_coverage=1 00:11:16.427 --rc genhtml_function_coverage=1 00:11:16.427 --rc genhtml_legend=1 00:11:16.427 --rc geninfo_all_blocks=1 00:11:16.427 --rc geninfo_unexecuted_blocks=1 00:11:16.427 00:11:16.427 ' 00:11:16.427 08:32:38 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:16.427 08:32:38 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:16.427 08:32:38 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:16.687 08:32:38 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:16.687 08:32:38 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.687 08:32:38 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.687 08:32:38 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.687 08:32:38 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.687 08:32:38 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.687 08:32:38 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.687 08:32:38 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.687 08:32:38 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.687 08:32:38 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:16.687 08:32:38 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
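The lt/cmp_versions trace above (scripts/common.sh) checks whether the installed lcov is older than 2 before picking the matching --rc option names for LCOV_OPTS: both version strings are split on ".", "-" and ":" and compared component by component. A minimal sketch of that comparison, simplified to numeric components (the real cmp_versions also validates each field and handles the ">" operator):

  version_lt() {
      local -a a b
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # versions are equal
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"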
00:11:16.687 08:32:38 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:16.687 08:32:38 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:16.687 08:32:38 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:16.687 08:32:38 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:16.687 08:32:38 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:16.687 08:32:38 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:16.687 08:32:38 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:16.687 08:32:38 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:16.687 08:32:38 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:16.687 08:32:38 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:16.687 08:32:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:16.687 08:32:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:16.687 08:32:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:16.687 08:32:38 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:16.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.206 Waiting for block devices as requested 00:11:17.466 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.466 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.466 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.724 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:23.013 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:23.013 08:32:44 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:23.013 08:32:44 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:23.013 08:32:44 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:23.013 08:32:44 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.013 08:32:44 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
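The register dump that continues below is functions.sh's nvme_get walking `nvme id-ctrl` output for the QEMU controller at 0000:00:11.0 and storing every field in a bash associative array (nvme0[vid], nvme0[ssvid], ...). A minimal sketch of that core loop, assuming nvme-cli's "field : value" output format and a /dev/nvme0 device node (the real helper additionally evals into a caller-named array and repeats the walk per namespace with id-ns):

  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}                  # field names are padded with spaces
      val="${val#"${val%%[![:space:]]*}"}"      # trim leading whitespace from the value
      [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)
  echo "vid=${ctrl[vid]} mn=${ctrl[mn]} subnqn=${ctrl[subnqn]}"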
00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:23.013 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:23.014 08:32:44 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:23.014 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.015 08:32:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.015 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.016 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
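The fields captured so far are enough to work out the namespace's logical block size: FLBAS selects which LBA format entry is in use, and that entry's lbads value is the log2 of the data size. For nvme0n1 the trace records flbas=0x4 (seen earlier) and, further down in this identify dump, lbaf4 as "ms:0 lbads:12 rp:0 (in use)", i.e. 4096-byte blocks with no metadata. A small worked sketch, with illustrative variable names:

flbas=0x4                                  # from the nvme0n1 trace above
lbaf4='ms:0 lbads:12 rp:0 (in use)'        # the entry flbas points at
fmt=$(( flbas & 0xf ))                     # low four bits pick the format index
lbads=${lbaf4#*lbads:}                     # strip up to and including "lbads:"
lbads=${lbads%% *}                         # keep just the number (12)
echo "format $fmt in use, block size $(( 1 << lbads )) bytes"   # prints 4096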
00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.017 08:32:44 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:23.017 08:32:44 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:23.017 08:32:44 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:23.017 08:32:44 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.017 08:32:44 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:23.017 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:23.018 08:32:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.018 
08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
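Values such as oacs=0x12a captured for nvme1 above are bitmasks; per the OACS bit assignments in the NVMe base specification, 0x12a indicates support for Format NVM, Namespace Management, Directives, and Doorbell Buffer Config. A quick illustrative check (not part of the test script):

oacs=0x12a                                   # Optional Admin Command Support from the trace
(( oacs & (1 << 1) )) && echo "Format NVM supported"
(( oacs & (1 << 3) )) && echo "Namespace Management supported"
(( oacs & (1 << 5) )) && echo "Directives supported"
(( oacs & (1 << 8) )) && echo "Doorbell Buffer Config supported"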
00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.018 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:23.019 08:32:44 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:23.019 08:32:44 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.019 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:23.020 08:32:44 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
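A few fields later the trace moves from the controller to its namespaces (for ns in "$ctrl/${ctrl##*/}n"*), repeats the same id-ns parse for nvme1n1, and records the controller in the ctrls/nvmes/bdfs arrays before advancing to nvme2. The overall discovery shape is roughly the following sketch; the readlink-based BDF lookup and the variable names are assumptions for illustration, not the script's actual helpers:

declare -A ctrls=() bdfs=()
for ctrl in /sys/class/nvme/nvme*; do
    ctrl_dev=${ctrl##*/}                               # e.g. nvme1
    # One common way to recover the PCI BDF on a typical sysfs layout:
    pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:10.0
    ctrls[$ctrl_dev]=$ctrl_dev
    bdfs[$ctrl_dev]=$pci
    for ns in "$ctrl/${ctrl_dev}n"*; do
        [[ -e $ns ]] || continue
        echo "namespace ${ns##*/} on $ctrl_dev ($pci)"
    done
done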
00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.020 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:23.021 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.022 
08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:23.022 08:32:44 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:23.022 08:32:44 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:23.022 08:32:44 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.022 08:32:44 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:23.022 08:32:44 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:23.022 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:23.023 08:32:44 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:23.023 08:32:44 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:23.023 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
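(Editor's note) The trace above is `set -x` output from nvme/functions.sh: for every line that `nvme id-ctrl` / `nvme id-ns` prints, the script splits on the first `:` and stores the value in a global associative array named after the device (nvme1n1[...], nvme2[...]). A minimal sketch of that parsing pattern follows; it is not the SPDK original, and it assumes nvme-cli is reachable as `nvme` (the trace itself calls the hard-coded build at /usr/local/src/nvme-cli/nvme).

    #!/usr/bin/env bash
    NVME_CMD=${NVME_CMD:-nvme}   # assumption: nvme-cli on PATH

    nvme_get() {
        local ref=$1 reg val
        shift                            # remaining args: id-ctrl /dev/nvme2, id-ns /dev/nvme2n1, ...
        local -gA "$ref=()"              # same global associative-array declaration seen in the trace

        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                    # skip headers and blank lines
            reg=${reg//[[:space:]]/}                     # "vid   " -> "vid"
            val=${val#"${val%%[![:space:]]*}"}           # strip leading padding from the value
            eval "${ref}[$reg]=\"\$val\""                # e.g. nvme2[vid]="0x1b36"
        done < <("$NVME_CMD" "$@")
    }

    # usage, mirroring the calls in the trace:
    #   nvme_get nvme2 id-ctrl /dev/nvme2
    #   nvme_get nvme2n1 id-ns /dev/nvme2n1
    #   echo "${nvme2[mdts]} ${nvme2n1[flbas]}"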
00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:23.024 08:32:44 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:23.024 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
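(Editor's note) Around functions.sh@47-63 (visible above where nvme1n1 hands over to nvme2, and again below when nvme2's namespaces are read) the driver loop walks /sys/class/nvme, filters controllers with pci_can_use from scripts/common.sh, runs nvme_get for the controller and each namespace, and fills the bookkeeping maps ctrls, nvmes, bdfs and ordered_ctrls. A sketch of that enumeration, assuming the nvme_get helper sketched earlier; how the PCI address is resolved here is an assumption, since the trace only shows the resulting value.

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    scan_nvme_ctrls() {
        local ctrl ns pci ctrl_dev ns_dev
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(basename "$(readlink -f "$ctrl/device")")  # assumption: BDF via sysfs symlink
            pci_can_use "$pci" || continue                   # skip devices the test must not touch

            ctrl_dev=${ctrl##*/}                             # e.g. nvme2
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # fills the nvme2[...] array

            local -gA "${ctrl_dev}_ns=()"                    # per-controller namespace map
            local -n _ctrl_ns=${ctrl_dev}_ns
            for ns in "$ctrl/${ctrl##*/}n"*; do              # nvme2n1, nvme2n2, ...
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # fills nvme2n1[...]
                _ctrl_ns[${ns##*n}]=$ns_dev                  # keyed by namespace index
            done

            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }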
00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:23.025 
08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:23.025 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
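(Editor's note) Once these arrays are populated, later test code can look up any identify field without re-running nvme-cli. The snippet below is purely illustrative and not taken from this log: since the test is nvme_scc, it presumably ends up checking the Copy bit in ONCS, and the oncs=0x15d value captured for nvme2 above does have bit 8 set (assumption: per the NVMe base specification, ONCS bit 8 advertises the Copy command).

    for ctrl_dev in "${!ctrls[@]}"; do
        declare -n _ctrl=$ctrl_dev               # point at the nvme1[...]/nvme2[...] array
        oncs=${_ctrl[oncs]}                      # e.g. 0x15d in this run
        if (( oncs & (1 << 8) )); then           # assumption: ONCS bit 8 = Copy command
            echo "$ctrl_dev (${bdfs[$ctrl_dev]}): Copy command advertised"
        fi
        unset -n _ctrl
    done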
00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.027 08:32:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:23.027 08:32:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:23.028 08:32:44 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:23.028 08:32:44 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:23.028 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
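The repeated IFS=: / read -r / eval steps above are the nvme_get helper from nvme/functions.sh turning `nvme id-ns` output into a global associative array (nvme2n2 here). A minimal sketch of that loop, simplified from what the trace shows (the real helper also handles the ref/shift indirection; the function name below is hypothetical):

nvme_get_sketch() {                   # hypothetical name; the in-tree helper is nvme_get
    local ref=$1 cmd=$2 dev=$3 reg val
    local -gA "$ref=()"               # declare the global array, e.g. nvme2n2=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # field name with padding stripped, e.g. nsze
        [[ -n $val ]] || continue     # entries with no value are skipped, as in the trace
        eval "${ref}[\$reg]=\${val# }"    # e.g. nvme2n2[nsze]="0x100000"
    done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
}

Called as nvme_get_sketch nvme2n2 id-ns /dev/nvme2n2, it would leave the same fields queryable as ${nvme2n2[nsze]}, ${nvme2n2[flbas]}, and so on.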
00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
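The handful of fields already captured for nvme2n2 (nsze, flbas, plus the lbaf descriptors that follow) are enough for a quick size sanity check. A rough sketch, assuming the array was filled by the loop above and that the low nibble of flbas indexes the in-use LBA format:

flbas_index=$(( ${nvme2n2[flbas]} & 0xf ))                    # 0x4 -> format 4, marked "(in use)"
lbads=$(grep -o 'lbads:[0-9]*' <<< "${nvme2n2[lbaf${flbas_index}]}" | cut -d: -f2)
block_size=$(( 1 << lbads ))                                  # lbads:12 -> 4096-byte blocks
blocks=$(( ${nvme2n2[nsze]} ))                                # 0x100000 -> 1048576 blocks
echo "nvme2n2: $(( blocks * block_size / 1024 / 1024 )) MiB"  # 4096 MiB for this QEMU namespace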
00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 
08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.029 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 
08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:23.030 08:32:44 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.030 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.031 
08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:23.031 08:32:44 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:23.031 08:32:44 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:23.031 08:32:44 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.031 08:32:44 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:23.031 08:32:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.031 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
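Stepping back from the field-by-field dump: this block is another pass of the controller loop in functions.sh (@47-@52), which walks /sys/class/nvme/nvme*, filters each device through pci_can_use, and then runs the same nvme_get parse against id-ctrl. A condensed sketch of that outer loop, assuming nvme/functions.sh (which defines pci_can_use and nvme_get) is sourced and with the namespace walk left out; how the script derives the PCI address is not visible in the trace, so the readlink line is an assumption:

declare -A ctrls bdfs
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                              # e.g. nvme3
    pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed sysfs route to the BDF, e.g. 0000:00:13.0
    pci_can_use "$pci" || continue                    # honours the harness allow/deny lists
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills nvme3=( [vid]=0x1b36 [ssvid]=0x1af4 ... )
    ctrls[$ctrl_dev]=$ctrl_dev
    bdfs[$ctrl_dev]=$pci
done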
00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.032 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 
08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
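One id-ctrl field worth calling out from earlier in this pass is mdts=7: MDTS limits a single transfer to 2^MDTS minimum-size memory pages, with 0 meaning no limit. A small check along those lines, assuming the usual 4 KiB CAP.MPSMIN for QEMU's emulated controller:

mdts=${nvme3[mdts]}          # 7, captured above
page=4096                    # assumed CAP.MPSMIN page size
if (( mdts == 0 )); then
    echo "nvme3: controller reports no transfer-size limit"
else
    echo "nvme3: max single transfer $(( (1 << mdts) * page / 1024 )) KiB"   # 512 KiB here
fi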
00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.033 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.034 08:32:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:23.034 08:32:44 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:23.034 
08:32:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:23.034 08:32:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:11:23.035 08:32:44 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:11:23.035 08:32:44 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:23.035 08:32:44 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:23.035 08:32:44 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:23.602 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:24.172 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.172 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.172 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.431 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:11:24.431 08:32:46 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:24.431 08:32:46 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:24.431 08:32:46 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.431 08:32:46 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:24.431 ************************************ 00:11:24.431 START TEST nvme_simple_copy 00:11:24.431 ************************************ 00:11:24.431 08:32:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:24.690 Initializing NVMe Controllers 00:11:24.690 Attaching to 0000:00:10.0 00:11:24.690 Controller supports SCC. Attached to 0000:00:10.0 00:11:24.690 Namespace ID: 1 size: 6GB 00:11:24.690 Initialization complete. 00:11:24.690 00:11:24.690 Controller QEMU NVMe Ctrl (12340 ) 00:11:24.690 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:24.690 Namespace Block Size:4096 00:11:24.690 Writing LBAs 0 to 63 with Random Data 00:11:24.690 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:24.690 LBAs matching Written Data: 64 00:11:24.690 00:11:24.690 real 0m0.274s 00:11:24.690 user 0m0.100s 00:11:24.690 sys 0m0.073s 00:11:24.690 08:32:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.690 08:32:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:24.690 ************************************ 00:11:24.690 END TEST nvme_simple_copy 00:11:24.690 ************************************ 00:11:24.690 00:11:24.690 real 0m8.458s 00:11:24.690 user 0m1.362s 00:11:24.690 sys 0m2.053s 00:11:24.690 08:32:46 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.690 08:32:46 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:24.690 ************************************ 00:11:24.690 END TEST nvme_scc 00:11:24.690 ************************************ 00:11:24.950 08:32:46 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:24.950 08:32:46 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:11:24.950 08:32:46 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:11:24.950 08:32:46 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:11:24.950 08:32:46 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:24.950 08:32:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:24.950 08:32:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.950 08:32:46 -- common/autotest_common.sh@10 -- # set +x 00:11:24.950 ************************************ 00:11:24.950 START TEST nvme_fdp 00:11:24.950 ************************************ 00:11:24.950 08:32:46 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:11:24.950 * Looking for test storage... 
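The controller picked for the SCC test above comes from the ONCS check traced in functions.sh: every controller reports oncs=0x15d, bit 8 of ONCS advertises the Copy (Simple Copy) command, so each one passes the (( oncs & 1 << 8 )) test and the first controller returned, nvme1 at 0000:00:10.0, is the one handed to simple_copy. A minimal standalone sketch of that check, not the repo's functions.sh code, assuming nvme-cli is installed and using a hypothetical helper name has_scc:

    has_scc() {
        # Pull the ONCS field out of 'nvme id-ctrl' output (e.g. "oncs : 0x15d")
        # and test bit 8, which advertises the Copy (Simple Copy) command.
        local oncs
        oncs=$(nvme id-ctrl "$1" | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
        (( oncs & 1 << 8 ))
    }

    has_scc /dev/nvme1 && echo "nvme1 supports simple copy"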
00:11:24.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:24.950 08:32:46 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:24.950 08:32:46 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:11:24.950 08:32:46 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:24.950 08:32:46 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.950 08:32:46 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:11:24.950 08:32:46 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.950 08:32:46 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:24.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.950 --rc genhtml_branch_coverage=1 00:11:24.950 --rc genhtml_function_coverage=1 00:11:24.950 --rc genhtml_legend=1 00:11:24.950 --rc geninfo_all_blocks=1 00:11:24.950 --rc geninfo_unexecuted_blocks=1 00:11:24.950 00:11:24.950 ' 00:11:24.950 08:32:46 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.950 --rc genhtml_branch_coverage=1 00:11:24.950 --rc genhtml_function_coverage=1 00:11:24.950 --rc genhtml_legend=1 00:11:24.950 --rc geninfo_all_blocks=1 00:11:24.950 --rc geninfo_unexecuted_blocks=1 00:11:24.950 00:11:24.950 ' 00:11:24.950 08:32:46 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:24.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.950 --rc genhtml_branch_coverage=1 00:11:24.950 --rc genhtml_function_coverage=1 00:11:24.950 --rc genhtml_legend=1 00:11:24.950 --rc geninfo_all_blocks=1 00:11:24.950 --rc geninfo_unexecuted_blocks=1 00:11:24.950 00:11:24.950 ' 00:11:24.950 08:32:46 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.950 --rc genhtml_branch_coverage=1 00:11:24.950 --rc genhtml_function_coverage=1 00:11:24.950 --rc genhtml_legend=1 00:11:24.950 --rc geninfo_all_blocks=1 00:11:24.950 --rc geninfo_unexecuted_blocks=1 00:11:24.950 00:11:24.950 ' 00:11:24.950 08:32:46 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:24.950 08:32:46 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:24.950 08:32:46 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:24.951 08:32:46 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:24.951 08:32:46 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:24.951 08:32:46 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.951 08:32:46 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.210 08:32:46 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.210 08:32:46 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.210 08:32:46 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.210 08:32:46 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.210 08:32:46 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.210 08:32:46 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:25.210 08:32:46 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
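The lcov probe at the start of the nvme_fdp run, traced in autotest_common.sh and scripts/common.sh above, decides which coverage flags to keep by comparing the installed lcov version against 2: both version strings are split on '.', '-' and ':' and the numeric fields are compared one by one, so 1.15 sorts below 2 and the --rc lcov_branch_coverage / --rc lcov_function_coverage options are exported. A small self-contained sketch of that comparison style, not the exact scripts/common.sh implementation, with version_lt as a hypothetical name:

    version_lt() {
        # Split both versions on '.', '-' and ':' and compare field by field;
        # missing fields count as 0. Succeeds when $1 sorts strictly below $2.
        local -a a b
        local i n
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: keep the legacy --rc options"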
00:11:25.210 08:32:46 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:25.211 08:32:46 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:25.211 08:32:46 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:25.211 08:32:46 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:25.211 08:32:46 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:25.211 08:32:46 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:25.211 08:32:46 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:25.211 08:32:46 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:25.211 08:32:46 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:25.211 08:32:46 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:25.211 08:32:46 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:25.471 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:25.729 Waiting for block devices as requested 00:11:25.988 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:25.988 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:25.988 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:25.988 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:31.273 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:31.273 08:32:52 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:31.273 08:32:52 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:31.273 08:32:52 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:31.273 08:32:52 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:31.273 08:32:52 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:11:31.273 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:31.274 08:32:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:31.274 08:32:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:31.274 08:32:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.274 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:31.275 08:32:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:31.275 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:31.276 08:32:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.276 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:31.277 08:32:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.277 
08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:31.277 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:31.278 08:32:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.278 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:31.279 08:32:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.279 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:31.280 08:32:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:31.280 08:32:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:31.280 08:32:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:31.280 08:32:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.280 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 
08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:31.281 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 
08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.282 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:31.283 08:32:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.283 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:31.284 08:32:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.284 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.285 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:31.286 08:32:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:31.286 08:32:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:31.286 08:32:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:31.286 08:32:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:31.286 
08:32:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:31.286 08:32:53 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.286 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.287 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
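The xtrace above is the nvme_get helper at work: functions.sh runs /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns) against the device, splits each "field : value" output line on the colon, and evals the pair into a global associative array named after the device, e.g. nvme2[oacs]=0x12a. A condensed sketch of that loop follows; the standalone name sketch_nvme_get, the use of a plain nvme binary from PATH, and the simplified key cleanup are illustrative assumptions, not the exact functions.sh code.

  sketch_nvme_get() {
      local ref=$1 cmd=$2 dev=$3 reg val
      local -gA "${ref}=()"                 # global associative array, e.g. nvme2=()
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue         # skip banner lines with no "field : value"
          reg=${reg%% *}                    # field name is the first token, e.g. "oacs"
          val=${val# }                      # drop the space after the colon, keep the rest verbatim
          eval "${ref}[${reg}]=\"\$val\""   # nvme2[oacs]="0x12a", nvme2[sn]="12342 ", ...
      done < <(nvme "$cmd" "$dev")          # assumes nvme-cli's nvme binary is in PATH
  }
  # Usage (illustrative): sketch_nvme_get nvme2 id-ctrl /dev/nvme2; echo "${nvme2[mdts]}"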
00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.578 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.579 08:32:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
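Fields such as oncs, oacs and vwc captured just above are bit masks, so later checks can test a single capability directly from the array. A minimal sketch, assuming ONCS bit 2 is the Dataset Management bit as defined in the NVMe base specification; the helper name is made up for illustration.

  supports_dsm() {                          # illustrative helper, not part of functions.sh
      local -n _c=$1                        # nameref to a controller array such as nvme2
      (( ${_c[oncs]:-0} & (1 << 2) ))       # ONCS bit 2: Dataset Management (NVMe base spec)
  }
  # supports_dsm nvme2 && echo "nvme2 supports DSM"   # true here, since 0x15d has bit 2 set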
00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:31.579 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:31.580 08:32:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
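The same arrays make it easy to work out the block size a namespace is formatted with: FLBAS bits 3:0 pick one of the lbafN descriptors (for namespaces with at most 16 formats, as here), and lbads in that descriptor is log2 of the data size. A short sketch under those assumptions; nvme1n1 above reported flbas 0x7 and lbaf7 "ms:64 lbads:12 rp:0 (in use)", i.e. 4096-byte data blocks with 64 bytes of metadata.

  lba_format_info() {                              # illustrative helper, not part of functions.sh
      local -n _ns=$1                              # nameref to a namespace array such as nvme1n1
      local idx lbaf lbads
      idx=$(( ${_ns[flbas]} & 0xf ))               # 0x7 -> LBA format 7
      lbaf=${_ns[lbaf$idx]}                        # "ms:64 lbads:12 rp:0 (in use)"
      lbads=${lbaf##*lbads:}; lbads=${lbads%% *}   # -> 12
      echo "$1: LBA format $idx, $((1 << lbads))-byte data blocks"
  }
  # lba_format_info nvme1n1   # -> nvme1n1: LBA format 7, 4096-byte data blocks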
00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.580 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
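Once a namespace array such as nvme2n1 is filled in, the trace records it in the global bookkeeping maps through a bash nameref (functions.sh@53-63 above: _ctrl_ns, ctrls, nvmes, bdfs). A condensed sketch of that registration step; the function name and argument order are illustrative assumptions, not the functions.sh interface.

  declare -gA ctrls=() nvmes=() bdfs=()            # global maps, as in the trace above

  register_ctrl() {                                # illustrative name, not part of functions.sh
      local ctrl_dev=$1 pci=$2 ns_dev=$3
      declare -gA "${ctrl_dev}_ns=()"              # per-controller namespace map, e.g. nvme2_ns
      local -n _ctrl_ns="${ctrl_dev}_ns"           # nameref: writes below land in nvme2_ns
      _ctrl_ns["${ns_dev##*n}"]=$ns_dev            # nvme2_ns[1]=nvme2n1
      ctrls["$ctrl_dev"]=$ctrl_dev
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns            # controller -> name of its namespace map
      bdfs["$ctrl_dev"]=$pci                       # controller -> PCI address, e.g. 0000:00:12.0
  }
  # Usage (illustrative): register_ctrl nvme2 0000:00:12.0 nvme2n1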
00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.581 08:32:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.581 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:31.582 08:32:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:31.582 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:31.583 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
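Each namespace block in this trace (nvme2n1, nvme2n2, now nvme2n3) comes from the loop at nvme/functions.sh@54-58: it globs the controller's nvmeXnY entries under sysfs, runs the id-ns lookup on each, and records the result in _ctrl_ns keyed by namespace number. A hedged sketch of that enumeration; collect_ns_sketch and the echo are illustrative placeholders for the real nvme_get call:

declare -A _ctrl_ns

collect_ns_sketch() {
    local ctrl=$1 ns ns_dev                          # e.g. /sys/class/nvme/nvme2
    for ns in "$ctrl/${ctrl##*/}n"*; do              # nvme2n1, nvme2n2, nvme2n3, ...
        [[ -e $ns ]] || continue                     # glob may match nothing
        ns_dev=${ns##*/}                             # strip the sysfs path
        echo "would collect: nvme id-ns /dev/${ns_dev}"
        _ctrl_ns[${ns##*n}]=$ns_dev                  # key by namespace id (1, 2, 3)
    done
}

# Usage: collect_ns_sketch /sys/class/nvme/nvme2; declare -p _ctrl_ns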
00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:31.584 
08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.584 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.585 08:32:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:31.585 08:32:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:31.585 08:32:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:31.585 08:32:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:31.585 08:32:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:31.585 08:32:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.585 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 
08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.586 
08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.586 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.587 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.588 08:32:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:31.588 08:32:53 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:11:31.588 08:32:53 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:11:31.588 08:32:53 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:31.588 08:32:53 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:31.588 08:32:53 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:32.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.096 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.096 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.096 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.096 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.096 08:32:54 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:33.096 08:32:54 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:33.096 08:32:54 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.096 08:32:54 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:33.096 ************************************ 00:11:33.096 START TEST nvme_flexible_data_placement 00:11:33.096 ************************************ 00:11:33.096 08:32:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:33.355 Initializing NVMe Controllers 00:11:33.355 Attaching to 0000:00:13.0 00:11:33.355 Controller supports FDP Attached to 0000:00:13.0 00:11:33.355 Namespace ID: 1 Endurance Group ID: 1 00:11:33.355 Initialization complete. 00:11:33.355 00:11:33.355 ================================== 00:11:33.355 == FDP tests for Namespace: #01 == 00:11:33.355 ================================== 00:11:33.355 00:11:33.355 Get Feature: FDP: 00:11:33.355 ================= 00:11:33.355 Enabled: Yes 00:11:33.355 FDP configuration Index: 0 00:11:33.355 00:11:33.355 FDP configurations log page 00:11:33.355 =========================== 00:11:33.355 Number of FDP configurations: 1 00:11:33.355 Version: 0 00:11:33.355 Size: 112 00:11:33.355 FDP Configuration Descriptor: 0 00:11:33.355 Descriptor Size: 96 00:11:33.355 Reclaim Group Identifier format: 2 00:11:33.355 FDP Volatile Write Cache: Not Present 00:11:33.355 FDP Configuration: Valid 00:11:33.355 Vendor Specific Size: 0 00:11:33.355 Number of Reclaim Groups: 2 00:11:33.355 Number of Recalim Unit Handles: 8 00:11:33.355 Max Placement Identifiers: 128 00:11:33.355 Number of Namespaces Suppprted: 256 00:11:33.355 Reclaim unit Nominal Size: 6000000 bytes 00:11:33.356 Estimated Reclaim Unit Time Limit: Not Reported 00:11:33.356 RUH Desc #000: RUH Type: Initially Isolated 00:11:33.356 RUH Desc #001: RUH Type: Initially Isolated 00:11:33.356 RUH Desc #002: RUH Type: Initially Isolated 00:11:33.356 RUH Desc #003: RUH Type: Initially Isolated 00:11:33.356 RUH Desc #004: RUH Type: Initially Isolated 00:11:33.356 RUH Desc #005: RUH Type: Initially Isolated 00:11:33.356 RUH Desc #006: RUH Type: Initially Isolated 00:11:33.356 RUH Desc #007: RUH Type: Initially Isolated 00:11:33.356 00:11:33.356 FDP reclaim unit handle usage log page 00:11:33.356 ====================================== 00:11:33.356 Number of Reclaim Unit Handles: 8 00:11:33.356 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:33.356 RUH Usage Desc #001: RUH Attributes: Unused 00:11:33.356 RUH Usage Desc #002: RUH Attributes: Unused 00:11:33.356 RUH Usage Desc #003: RUH Attributes: Unused 00:11:33.356 RUH Usage Desc #004: RUH Attributes: Unused 00:11:33.356 RUH Usage Desc #005: RUH Attributes: Unused 00:11:33.356 RUH Usage Desc #006: RUH Attributes: Unused 00:11:33.356 RUH Usage Desc #007: RUH Attributes: Unused 00:11:33.356 00:11:33.356 FDP statistics log page 00:11:33.356 ======================= 00:11:33.356 Host bytes with metadata written: 1686437888 00:11:33.356 Media bytes with metadata written: 1687281664 00:11:33.356 Media bytes erased: 0 00:11:33.356 00:11:33.356 FDP Reclaim unit handle status 00:11:33.356 ============================== 00:11:33.356 Number of RUHS descriptors: 2 00:11:33.356 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000017b0 00:11:33.356 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:33.356 00:11:33.356 FDP write on placement id: 0 success 00:11:33.356 00:11:33.356 Set Feature: Enabling FDP events on Placement handle: 
#0 Success 00:11:33.356 00:11:33.356 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:33.356 00:11:33.356 Get Feature: FDP Events for Placement handle: #0 00:11:33.356 ======================== 00:11:33.356 Number of FDP Events: 6 00:11:33.356 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:33.356 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:33.356 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:33.356 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:33.356 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:33.356 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:33.356 00:11:33.356 FDP events log page 00:11:33.356 =================== 00:11:33.356 Number of FDP events: 1 00:11:33.356 FDP Event #0: 00:11:33.356 Event Type: RU Not Written to Capacity 00:11:33.356 Placement Identifier: Valid 00:11:33.356 NSID: Valid 00:11:33.356 Location: Valid 00:11:33.356 Placement Identifier: 0 00:11:33.356 Event Timestamp: 2 00:11:33.356 Namespace Identifier: 1 00:11:33.356 Reclaim Group Identifier: 0 00:11:33.356 Reclaim Unit Handle Identifier: 0 00:11:33.356 00:11:33.356 FDP test passed 00:11:33.356 00:11:33.356 real 0m0.248s 00:11:33.356 user 0m0.089s 00:11:33.356 sys 0m0.058s 00:11:33.356 08:32:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.356 08:32:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:33.356 ************************************ 00:11:33.356 END TEST nvme_flexible_data_placement 00:11:33.356 ************************************ 00:11:33.356 ************************************ 00:11:33.356 END TEST nvme_fdp 00:11:33.356 ************************************ 00:11:33.356 00:11:33.356 real 0m8.560s 00:11:33.356 user 0m1.408s 00:11:33.356 sys 0m2.231s 00:11:33.356 08:32:55 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.356 08:32:55 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:33.356 08:32:55 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:33.356 08:32:55 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:33.356 08:32:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:33.356 08:32:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.356 08:32:55 -- common/autotest_common.sh@10 -- # set +x 00:11:33.356 ************************************ 00:11:33.356 START TEST nvme_rpc 00:11:33.356 ************************************ 00:11:33.356 08:32:55 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:33.616 * Looking for test storage... 
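The functions.sh tracing above is a single pattern repeated per register: identify output is split on ':' with "IFS=: read -r reg val", every non-empty value is eval'ed into an nvme3[...] entry, and FDP support is then decided by testing bit 19 of the captured ctratt value, which is why nvme3 (ctratt=0x88010) is selected while the 0x8000 controllers are skipped. A minimal standalone sketch of that parse-and-test pattern follows; the associative array and the here-doc sample values (taken from this run) are illustrative, not part of functions.sh.

# Sketch of the functions.sh pattern: parse "name : value" pairs, then test
# CTRATT bit 19 (FDP supported) the same way ctrl_has_fdp does.
declare -A ctrl
while IFS=: read -r reg val; do
    reg=$(echo "$reg" | xargs)    # trim surrounding whitespace
    val=$(echo "$val" | xargs)
    [[ -n $val ]] && ctrl[$reg]=$val
done <<'EOF'
ctratt : 0x88010
oncs : 0x15d
subnqn : nqn.2019-08.org.qemu:fdp-subsys3
EOF
if (( ${ctrl[ctratt]} & 1 << 19 )); then
    echo "FDP-capable controller (ctratt=${ctrl[ctratt]})"
fi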
00:11:33.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.616 08:32:55 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:33.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.616 --rc genhtml_branch_coverage=1 00:11:33.616 --rc genhtml_function_coverage=1 00:11:33.616 --rc genhtml_legend=1 00:11:33.616 --rc geninfo_all_blocks=1 00:11:33.616 --rc geninfo_unexecuted_blocks=1 00:11:33.616 00:11:33.616 ' 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:33.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.616 --rc genhtml_branch_coverage=1 00:11:33.616 --rc genhtml_function_coverage=1 00:11:33.616 --rc genhtml_legend=1 00:11:33.616 --rc geninfo_all_blocks=1 00:11:33.616 --rc geninfo_unexecuted_blocks=1 00:11:33.616 00:11:33.616 ' 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:33.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.616 --rc genhtml_branch_coverage=1 00:11:33.616 --rc genhtml_function_coverage=1 00:11:33.616 --rc genhtml_legend=1 00:11:33.616 --rc geninfo_all_blocks=1 00:11:33.616 --rc geninfo_unexecuted_blocks=1 00:11:33.616 00:11:33.616 ' 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:33.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.616 --rc genhtml_branch_coverage=1 00:11:33.616 --rc genhtml_function_coverage=1 00:11:33.616 --rc genhtml_legend=1 00:11:33.616 --rc geninfo_all_blocks=1 00:11:33.616 --rc geninfo_unexecuted_blocks=1 00:11:33.616 00:11:33.616 ' 00:11:33.616 08:32:55 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:33.616 08:32:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:33.616 08:32:55 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:33.876 08:32:55 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:33.876 08:32:55 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:33.876 08:32:55 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:11:33.876 08:32:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:33.876 08:32:55 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78548 00:11:33.876 08:32:55 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:33.876 08:32:55 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:33.876 08:32:55 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78548 00:11:33.876 08:32:55 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 78548 ']' 00:11:33.876 08:32:55 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.876 08:32:55 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.876 08:32:55 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.876 08:32:55 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.876 08:32:55 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.876 [2024-11-19 08:32:55.686638] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
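nvme_rpc.sh drives the target purely over JSON-RPC: the trace above picks the first NVMe bus address by piping gen_nvme.sh through jq and launches spdk_tgt, and the lines that follow attach that device, deliberately fail a firmware update against a missing file, and detach it again. A condensed sketch of the same sequence, assuming a running spdk_tgt listening on the default /var/tmp/spdk.sock; the commands and flags are the ones traced in this run, only the shell variables are illustrative.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# First NVMe bus address, using the same jq filter the test traces above
bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf"
# Expected to fail with JSON-RPC error -32603 ("open file failed.") since the file does not exist
$rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 || echo "firmware apply failed as expected"
$rpc bdev_nvme_detach_controller Nvme0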
00:11:33.876 [2024-11-19 08:32:55.686799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78548 ] 00:11:34.136 [2024-11-19 08:32:55.846201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:34.136 [2024-11-19 08:32:55.877839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.136 [2024-11-19 08:32:55.877938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.704 08:32:56 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.704 08:32:56 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:34.704 08:32:56 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:34.962 Nvme0n1 00:11:34.962 08:32:56 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:34.962 08:32:56 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:35.231 request: 00:11:35.231 { 00:11:35.231 "bdev_name": "Nvme0n1", 00:11:35.231 "filename": "non_existing_file", 00:11:35.231 "method": "bdev_nvme_apply_firmware", 00:11:35.231 "req_id": 1 00:11:35.231 } 00:11:35.231 Got JSON-RPC error response 00:11:35.231 response: 00:11:35.231 { 00:11:35.231 "code": -32603, 00:11:35.231 "message": "open file failed." 00:11:35.231 } 00:11:35.231 08:32:57 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:35.231 08:32:57 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:35.231 08:32:57 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:35.502 08:32:57 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:35.502 08:32:57 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 78548 00:11:35.502 08:32:57 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 78548 ']' 00:11:35.502 08:32:57 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 78548 00:11:35.502 08:32:57 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:35.502 08:32:57 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.502 08:32:57 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78548 00:11:35.502 08:32:57 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.502 killing process with pid 78548 00:11:35.502 08:32:57 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.502 08:32:57 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78548' 00:11:35.503 08:32:57 nvme_rpc -- common/autotest_common.sh@973 -- # kill 78548 00:11:35.503 08:32:57 nvme_rpc -- common/autotest_common.sh@978 -- # wait 78548 00:11:35.762 00:11:35.762 real 0m2.414s 00:11:35.762 user 0m4.430s 00:11:35.762 sys 0m0.711s 00:11:35.762 08:32:57 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.762 08:32:57 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.762 ************************************ 00:11:35.762 END TEST nvme_rpc 00:11:35.762 ************************************ 00:11:36.021 08:32:57 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:36.021 08:32:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:11:36.021 08:32:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.021 08:32:57 -- common/autotest_common.sh@10 -- # set +x 00:11:36.021 ************************************ 00:11:36.021 START TEST nvme_rpc_timeouts 00:11:36.021 ************************************ 00:11:36.021 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:36.021 * Looking for test storage... 00:11:36.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:36.021 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:36.021 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:11:36.021 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:36.281 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.281 08:32:57 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:36.281 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.281 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:36.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.281 --rc genhtml_branch_coverage=1 00:11:36.281 --rc genhtml_function_coverage=1 00:11:36.281 --rc genhtml_legend=1 00:11:36.281 --rc geninfo_all_blocks=1 00:11:36.281 --rc geninfo_unexecuted_blocks=1 00:11:36.281 00:11:36.281 ' 00:11:36.281 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:36.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.281 --rc genhtml_branch_coverage=1 00:11:36.281 --rc genhtml_function_coverage=1 00:11:36.281 --rc genhtml_legend=1 00:11:36.281 --rc geninfo_all_blocks=1 00:11:36.281 --rc geninfo_unexecuted_blocks=1 00:11:36.281 00:11:36.281 ' 00:11:36.281 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:36.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.281 --rc genhtml_branch_coverage=1 00:11:36.281 --rc genhtml_function_coverage=1 00:11:36.281 --rc genhtml_legend=1 00:11:36.281 --rc geninfo_all_blocks=1 00:11:36.281 --rc geninfo_unexecuted_blocks=1 00:11:36.281 00:11:36.281 ' 00:11:36.281 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:36.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.281 --rc genhtml_branch_coverage=1 00:11:36.281 --rc genhtml_function_coverage=1 00:11:36.281 --rc genhtml_legend=1 00:11:36.281 --rc geninfo_all_blocks=1 00:11:36.281 --rc geninfo_unexecuted_blocks=1 00:11:36.281 00:11:36.281 ' 00:11:36.281 08:32:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.281 08:32:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78602 00:11:36.281 08:32:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78602 00:11:36.281 08:32:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78634 00:11:36.281 08:32:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:36.281 08:32:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:36.282 08:32:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78634 00:11:36.282 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 78634 ']' 00:11:36.282 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.282 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.282 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.282 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.282 08:32:57 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:36.282 [2024-11-19 08:32:58.039225] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:11:36.282 [2024-11-19 08:32:58.039352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78634 ] 00:11:36.541 [2024-11-19 08:32:58.195727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:36.541 [2024-11-19 08:32:58.225942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.541 [2024-11-19 08:32:58.226038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.110 08:32:58 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.110 08:32:58 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:11:37.110 Checking default timeout settings: 00:11:37.110 08:32:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:37.110 08:32:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:37.369 Making settings changes with rpc: 00:11:37.369 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:37.370 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:37.629 Check default vs. modified settings: 00:11:37.629 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:11:37.629 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78602 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78602 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:38.199 Setting action_on_timeout is changed as expected. 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78602 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78602 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:38.199 Setting timeout_us is changed as expected. 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78602 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78602 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:38.199 Setting timeout_admin_us is changed as expected. 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78602 /tmp/settings_modified_78602 00:11:38.199 08:32:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78634 00:11:38.199 08:32:59 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 78634 ']' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 78634 00:11:38.199 08:32:59 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:11:38.199 08:32:59 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78634 00:11:38.199 killing process with pid 78634 00:11:38.199 08:32:59 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.199 08:32:59 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78634' 00:11:38.199 08:32:59 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 78634 00:11:38.199 08:32:59 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 78634 00:11:38.459 RPC TIMEOUT SETTING TEST PASSED. 00:11:38.459 08:33:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
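The timeouts test is a plain before/after diff of the saved configuration: save_config, change the three bdev_nvme timeout options over RPC, save_config again, and verify each field actually changed. Condensed into one runnable sketch against an already running spdk_tgt; the temporary file paths are illustrative, while the options and the grep/awk/sed pipeline are the ones traced above.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified
for setting in action_on_timeout timeout_us timeout_admin_us; do
    # Extract the value field and strip punctuation, as nvme_rpc_timeouts.sh does
    before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$before" != "$after" ]; then
        echo "Setting $setting is changed as expected."
    fi
done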
00:11:38.459 00:11:38.459 real 0m2.585s 00:11:38.459 user 0m5.061s 00:11:38.459 sys 0m0.680s 00:11:38.459 08:33:00 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.459 08:33:00 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:38.459 ************************************ 00:11:38.459 END TEST nvme_rpc_timeouts 00:11:38.459 ************************************ 00:11:38.459 08:33:00 -- spdk/autotest.sh@239 -- # uname -s 00:11:38.459 08:33:00 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:38.459 08:33:00 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:38.719 08:33:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:38.719 08:33:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.719 08:33:00 -- common/autotest_common.sh@10 -- # set +x 00:11:38.719 ************************************ 00:11:38.719 START TEST sw_hotplug 00:11:38.719 ************************************ 00:11:38.719 08:33:00 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:38.719 * Looking for test storage... 00:11:38.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:38.719 08:33:00 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:38.719 08:33:00 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:11:38.719 08:33:00 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:38.719 08:33:00 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.719 08:33:00 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:11:38.719 08:33:00 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.719 08:33:00 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:38.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.719 --rc genhtml_branch_coverage=1 00:11:38.719 --rc genhtml_function_coverage=1 00:11:38.719 --rc genhtml_legend=1 00:11:38.719 --rc geninfo_all_blocks=1 00:11:38.719 --rc geninfo_unexecuted_blocks=1 00:11:38.719 00:11:38.719 ' 00:11:38.719 08:33:00 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:38.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.719 --rc genhtml_branch_coverage=1 00:11:38.719 --rc genhtml_function_coverage=1 00:11:38.719 --rc genhtml_legend=1 00:11:38.719 --rc geninfo_all_blocks=1 00:11:38.719 --rc geninfo_unexecuted_blocks=1 00:11:38.719 00:11:38.719 ' 00:11:38.719 08:33:00 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:38.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.719 --rc genhtml_branch_coverage=1 00:11:38.719 --rc genhtml_function_coverage=1 00:11:38.719 --rc genhtml_legend=1 00:11:38.719 --rc geninfo_all_blocks=1 00:11:38.719 --rc geninfo_unexecuted_blocks=1 00:11:38.719 00:11:38.719 ' 00:11:38.719 08:33:00 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:38.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.719 --rc genhtml_branch_coverage=1 00:11:38.719 --rc genhtml_function_coverage=1 00:11:38.719 --rc genhtml_legend=1 00:11:38.719 --rc geninfo_all_blocks=1 00:11:38.719 --rc geninfo_unexecuted_blocks=1 00:11:38.719 00:11:38.719 ' 00:11:38.719 08:33:00 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:39.299 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:39.574 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:39.574 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:39.574 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:39.574 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:39.574 08:33:01 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:39.574 08:33:01 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:39.574 08:33:01 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
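Before the hotplug test proper starts, the trace above detours through scripts/common.sh to pick lcov options: the installed lcov version is split on '.', '-' and ':' and compared field by field against 2, with non-numeric fields treated as 0. A standalone sketch of that comparison, written as an independent helper rather than a copy of scripts/common.sh:

  # true (exit 0) when version $1 sorts strictly below version $2
  version_lt() {
      local -a ver1 ver2
      local v a b
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          [[ $a =~ ^[0-9]+$ ]] || a=0      # mirror the decimal() guard visible in the trace
          [[ $b =~ ^[0-9]+$ ]] || b=0
          ((a > b)) && return 1
          ((a < b)) && return 0
      done
      return 1   # equal versions are not strictly lower
  }

  version_lt 1.15 2 && echo "lcov is older than 2.x"

In this run the comparison returns 0 (1.15 < 2), so the legacy --rc lcov_branch_coverage/lcov_function_coverage options are exported, as the trace shows.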
00:11:39.574 08:33:01 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:39.574 08:33:01 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:39.574 08:33:01 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:39.574 08:33:01 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:39.574 08:33:01 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:39.574 08:33:01 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:40.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:40.403 Waiting for block devices as requested 00:11:40.403 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.663 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.663 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.663 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:45.934 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:45.934 08:33:07 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:45.934 08:33:07 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:46.503 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:46.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:46.503 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:46.762 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:47.328 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:47.328 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:47.328 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:47.328 08:33:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.328 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:47.328 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:47.328 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=79499 00:11:47.328 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:47.328 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:47.328 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:47.328 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:47.328 08:33:09 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:47.329 08:33:09 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:47.329 08:33:09 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:47.329 08:33:09 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:47.329 08:33:09 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:11:47.329 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:47.329 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:47.329 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:47.329 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:47.329 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:47.587 Initializing NVMe Controllers 00:11:47.587 Attaching to 0000:00:10.0 00:11:47.587 Attaching to 0000:00:11.0 00:11:47.587 Attached to 0000:00:10.0 00:11:47.587 Attached to 0000:00:11.0 00:11:47.587 Initialization complete. Starting I/O... 
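The nvme_in_userspace walk above (scripts/common.sh@233-@329) finds the controllers to hotplug by PCI class: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe), then checks each BDF against PCI_ALLOWED/PCI_BLOCKED and, on Linux, for a node under /sys/bus/pci/drivers/nvme. A condensed sketch of that enumeration; the lspci pipeline is copied from the trace, while the allow-list check is simplified to a plain substring test, so treat it as illustrative:

  # PCI addresses of NVMe controllers: class 0x01, subclass 0x08, prog-if 0x02
  nvme_bdfs() {
      # lspci -mm -n -D: first field is the BDF, second the quoted 4-digit class/subclass code
      lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  }

  for bdf in $(nvme_bdfs); do
      # honour the allow-list the test sets later (PCI_ALLOWED='0000:00:10.0 0000:00:11.0')
      if [[ -n ${PCI_ALLOWED:-} ]] && [[ " $PCI_ALLOWED " != *" $bdf "* ]]; then
          continue
      fi
      echo "$bdf"
  done

In this run four controllers are found (10.0 through 13.0), nvme_count=2 trims the list to the first two, and PCI_ALLOWED is then set so the second setup.sh invocation only rebinds 10.0 and 11.0 to uio_pci_generic and skips the denied 12.0 and 13.0.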
00:11:47.587 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:47.587 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:47.587 00:11:48.521 QEMU NVMe Ctrl (12340 ): 2036 I/Os completed (+2036) 00:11:48.521 QEMU NVMe Ctrl (12341 ): 2037 I/Os completed (+2037) 00:11:48.521 00:11:49.461 QEMU NVMe Ctrl (12340 ): 4808 I/Os completed (+2772) 00:11:49.461 QEMU NVMe Ctrl (12341 ): 4811 I/Os completed (+2774) 00:11:49.461 00:11:50.840 QEMU NVMe Ctrl (12340 ): 7496 I/Os completed (+2688) 00:11:50.840 QEMU NVMe Ctrl (12341 ): 7518 I/Os completed (+2707) 00:11:50.840 00:11:51.775 QEMU NVMe Ctrl (12340 ): 10300 I/Os completed (+2804) 00:11:51.775 QEMU NVMe Ctrl (12341 ): 10325 I/Os completed (+2807) 00:11:51.775 00:11:52.710 QEMU NVMe Ctrl (12340 ): 13104 I/Os completed (+2804) 00:11:52.710 QEMU NVMe Ctrl (12341 ): 13137 I/Os completed (+2812) 00:11:52.710 00:11:53.296 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:53.296 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:53.296 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:53.296 [2024-11-19 08:33:15.174386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:53.296 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:53.296 [2024-11-19 08:33:15.175659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.296 [2024-11-19 08:33:15.175706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.296 [2024-11-19 08:33:15.175733] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.296 [2024-11-19 08:33:15.175748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.296 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:53.296 [2024-11-19 08:33:15.177240] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.296 [2024-11-19 08:33:15.177303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.296 [2024-11-19 08:33:15.177321] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.296 [2024-11-19 08:33:15.177342] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.557 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:53.557 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:53.557 [2024-11-19 08:33:15.206454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:53.557 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:53.557 [2024-11-19 08:33:15.207662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.557 [2024-11-19 08:33:15.207713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.557 [2024-11-19 08:33:15.207754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.557 [2024-11-19 08:33:15.207770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.557 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:53.557 [2024-11-19 08:33:15.209196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.557 [2024-11-19 08:33:15.209238] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.557 [2024-11-19 08:33:15.209262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.557 [2024-11-19 08:33:15.209277] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.557 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:53.557 EAL: Scan for (pci) bus failed. 00:11:53.557 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:53.557 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:53.557 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.557 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.557 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:53.557 00:11:53.557 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:53.557 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.557 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.557 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.557 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:53.557 Attaching to 0000:00:10.0 00:11:53.557 Attached to 0000:00:10.0 00:11:53.817 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:53.817 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.817 08:33:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:53.817 Attaching to 0000:00:11.0 00:11:53.817 Attached to 0000:00:11.0 00:11:54.755 QEMU NVMe Ctrl (12340 ): 2724 I/Os completed (+2724) 00:11:54.755 QEMU NVMe Ctrl (12341 ): 2446 I/Os completed (+2446) 00:11:54.755 00:11:55.694 QEMU NVMe Ctrl (12340 ): 5560 I/Os completed (+2836) 00:11:55.694 QEMU NVMe Ctrl (12341 ): 5289 I/Os completed (+2843) 00:11:55.694 00:11:56.634 QEMU NVMe Ctrl (12340 ): 8436 I/Os completed (+2876) 00:11:56.634 QEMU NVMe Ctrl (12341 ): 8173 I/Os completed (+2884) 00:11:56.634 00:11:57.624 QEMU NVMe Ctrl (12340 ): 11088 I/Os completed (+2652) 00:11:57.624 QEMU NVMe Ctrl (12341 ): 10890 I/Os completed (+2717) 00:11:57.624 00:11:58.563 QEMU NVMe Ctrl (12340 ): 13868 I/Os completed (+2780) 00:11:58.563 QEMU NVMe Ctrl (12341 ): 13685 I/Os completed (+2795) 00:11:58.563 00:11:59.503 QEMU NVMe Ctrl (12340 ): 16752 I/Os completed (+2884) 00:11:59.503 QEMU NVMe Ctrl (12341 ): 16576 I/Os completed (+2891) 00:11:59.503 00:12:00.443 QEMU NVMe Ctrl (12340 ): 19560 I/Os completed (+2808) 00:12:00.443 QEMU NVMe Ctrl (12341 ): 19404 I/Os completed (+2828) 
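Each hotplug event in this first phase has the same shape, visible in the sw_hotplug.sh line numbers above: write 1 to the device's remove node (@40), let the driver report the controller as failed and abort its outstanding commands, then rescan the bus (@56) and push the controller back to uio_pci_generic (@58-@62) before sleeping (@66) so the hotplug example can re-attach. The trace only shows the values being echoed, not their targets, so the sysfs paths below are the standard Linux PCI mechanism and are an assumption, not a copy of the script:

  hotplug_wait=6
  nvmes=(0000:00:10.0 0000:00:11.0)

  # surprise-remove both controllers out from under the running hotplug example
  for dev in "${nvmes[@]}"; do
      echo 1 > "/sys/bus/pci/devices/$dev/remove"           # assumed target of the '@40 echo 1'
  done

  # bring them back and hand them to the userspace driver again
  echo 1 > /sys/bus/pci/rescan                               # assumed target of the '@56 echo 1'
  for dev in "${nvmes[@]}"; do
      echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
      echo "$dev" > /sys/bus/pci/drivers_probe
      echo '' > "/sys/bus/pci/devices/$dev/driver_override"  # clear the override afterwards
  done

  sleep $((hotplug_wait * 2))   # the trace shows 'sleep 12' here with hotplug_wait=6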
00:12:00.443 00:12:01.839 QEMU NVMe Ctrl (12340 ): 22332 I/Os completed (+2772) 00:12:01.839 QEMU NVMe Ctrl (12341 ): 22189 I/Os completed (+2785) 00:12:01.839 00:12:02.778 QEMU NVMe Ctrl (12340 ): 25164 I/Os completed (+2832) 00:12:02.778 QEMU NVMe Ctrl (12341 ): 25024 I/Os completed (+2835) 00:12:02.778 00:12:03.717 QEMU NVMe Ctrl (12340 ): 27950 I/Os completed (+2786) 00:12:03.717 QEMU NVMe Ctrl (12341 ): 27815 I/Os completed (+2791) 00:12:03.717 00:12:04.656 QEMU NVMe Ctrl (12340 ): 30490 I/Os completed (+2540) 00:12:04.656 QEMU NVMe Ctrl (12341 ): 30373 I/Os completed (+2558) 00:12:04.656 00:12:05.596 QEMU NVMe Ctrl (12340 ): 33250 I/Os completed (+2760) 00:12:05.596 QEMU NVMe Ctrl (12341 ): 33133 I/Os completed (+2760) 00:12:05.596 00:12:05.596 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:05.596 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:05.596 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.596 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.596 [2024-11-19 08:33:27.495139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:05.596 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:05.596 [2024-11-19 08:33:27.496385] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.596 [2024-11-19 08:33:27.496434] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.596 [2024-11-19 08:33:27.496452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.596 [2024-11-19 08:33:27.496472] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.596 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:05.596 [2024-11-19 08:33:27.497985] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.596 [2024-11-19 08:33:27.498030] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.596 [2024-11-19 08:33:27.498046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.596 [2024-11-19 08:33:27.498061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.856 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.856 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.856 [2024-11-19 08:33:27.529542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:05.856 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:05.856 [2024-11-19 08:33:27.531236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.856 [2024-11-19 08:33:27.531290] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.856 [2024-11-19 08:33:27.531313] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.856 [2024-11-19 08:33:27.531330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.856 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:05.856 [2024-11-19 08:33:27.533129] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.856 [2024-11-19 08:33:27.533178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.856 [2024-11-19 08:33:27.533203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.856 [2024-11-19 08:33:27.533221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.856 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:05.856 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:05.856 EAL: Scan for (pci) bus failed. 00:12:05.856 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:05.856 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.856 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.856 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:05.856 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:05.856 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.856 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.856 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.856 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:05.856 Attaching to 0000:00:10.0 00:12:05.856 Attached to 0000:00:10.0 00:12:06.115 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:06.115 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:06.115 08:33:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:06.115 Attaching to 0000:00:11.0 00:12:06.115 Attached to 0000:00:11.0 00:12:06.686 QEMU NVMe Ctrl (12340 ): 1564 I/Os completed (+1564) 00:12:06.686 QEMU NVMe Ctrl (12341 ): 1351 I/Os completed (+1351) 00:12:06.686 00:12:07.626 QEMU NVMe Ctrl (12340 ): 4404 I/Os completed (+2840) 00:12:07.626 QEMU NVMe Ctrl (12341 ): 4192 I/Os completed (+2841) 00:12:07.626 00:12:08.635 QEMU NVMe Ctrl (12340 ): 7124 I/Os completed (+2720) 00:12:08.635 QEMU NVMe Ctrl (12341 ): 6922 I/Os completed (+2730) 00:12:08.635 00:12:09.574 QEMU NVMe Ctrl (12340 ): 9860 I/Os completed (+2736) 00:12:09.574 QEMU NVMe Ctrl (12341 ): 9689 I/Os completed (+2767) 00:12:09.574 00:12:10.514 QEMU NVMe Ctrl (12340 ): 12720 I/Os completed (+2860) 00:12:10.514 QEMU NVMe Ctrl (12341 ): 12561 I/Os completed (+2872) 00:12:10.514 00:12:11.451 QEMU NVMe Ctrl (12340 ): 15572 I/Os completed (+2852) 00:12:11.451 QEMU NVMe Ctrl (12341 ): 15416 I/Os completed (+2855) 00:12:11.451 00:12:12.830 QEMU NVMe Ctrl (12340 ): 18300 I/Os completed (+2728) 00:12:12.830 QEMU NVMe Ctrl (12341 ): 18186 I/Os completed (+2770) 00:12:12.830 
00:12:13.768 QEMU NVMe Ctrl (12340 ): 21028 I/Os completed (+2728) 00:12:13.768 QEMU NVMe Ctrl (12341 ): 20924 I/Os completed (+2738) 00:12:13.768 00:12:14.706 QEMU NVMe Ctrl (12340 ): 23884 I/Os completed (+2856) 00:12:14.706 QEMU NVMe Ctrl (12341 ): 23785 I/Os completed (+2861) 00:12:14.706 00:12:15.660 QEMU NVMe Ctrl (12340 ): 26688 I/Os completed (+2804) 00:12:15.660 QEMU NVMe Ctrl (12341 ): 26609 I/Os completed (+2824) 00:12:15.660 00:12:16.619 QEMU NVMe Ctrl (12340 ): 29376 I/Os completed (+2688) 00:12:16.619 QEMU NVMe Ctrl (12341 ): 29305 I/Os completed (+2696) 00:12:16.619 00:12:17.555 QEMU NVMe Ctrl (12340 ): 32186 I/Os completed (+2810) 00:12:17.555 QEMU NVMe Ctrl (12341 ): 32121 I/Os completed (+2816) 00:12:17.555 00:12:18.123 08:33:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:18.123 08:33:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:18.123 08:33:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.123 08:33:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.123 [2024-11-19 08:33:39.805141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:18.123 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:18.123 [2024-11-19 08:33:39.806433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 [2024-11-19 08:33:39.806501] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 [2024-11-19 08:33:39.806519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 [2024-11-19 08:33:39.806538] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:18.123 [2024-11-19 08:33:39.808093] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 [2024-11-19 08:33:39.808134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 [2024-11-19 08:33:39.808149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 [2024-11-19 08:33:39.808167] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 08:33:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.123 08:33:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.123 [2024-11-19 08:33:39.835949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:18.123 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:18.123 [2024-11-19 08:33:39.837514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 [2024-11-19 08:33:39.837573] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 [2024-11-19 08:33:39.837599] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 [2024-11-19 08:33:39.837621] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:18.123 [2024-11-19 08:33:39.839536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 [2024-11-19 08:33:39.839588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 [2024-11-19 08:33:39.839611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 [2024-11-19 08:33:39.839630] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.123 08:33:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:18.123 08:33:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:18.123 08:33:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:18.123 08:33:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:18.123 08:33:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:18.123 08:33:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:18.123 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:18.123 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:18.123 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:18.123 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:18.123 Attaching to 0000:00:10.0 00:12:18.123 Attached to 0000:00:10.0 00:12:18.382 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:18.382 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:18.382 08:33:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:18.382 Attaching to 0000:00:11.0 00:12:18.382 Attached to 0000:00:11.0 00:12:18.382 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:18.382 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:18.382 [2024-11-19 08:33:40.117031] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:30.666 08:33:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:30.666 08:33:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:30.666 08:33:52 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.94 00:12:30.666 08:33:52 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.94 00:12:30.666 08:33:52 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:30.666 08:33:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.94 00:12:30.666 08:33:52 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.94 2 00:12:30.666 remove_attach_helper took 42.94s to complete (handling 2 nvme drive(s)) 08:33:52 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:37.262 08:33:58 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 79499 00:12:37.262 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (79499) - No such process 00:12:37.262 08:33:58 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 79499 00:12:37.262 08:33:58 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:37.262 08:33:58 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:37.262 08:33:58 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:37.262 08:33:58 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80048 00:12:37.262 08:33:58 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:37.262 08:33:58 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:37.262 08:33:58 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80048 00:12:37.262 08:33:58 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 80048 ']' 00:12:37.262 08:33:58 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.262 08:33:58 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.262 08:33:58 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.262 08:33:58 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.262 08:33:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.262 [2024-11-19 08:33:58.221605] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:12:37.262 [2024-11-19 08:33:58.221773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80048 ] 00:12:37.262 [2024-11-19 08:33:58.379739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.262 [2024-11-19 08:33:58.410532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.262 08:33:59 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.262 08:33:59 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:12:37.262 08:33:59 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:37.262 08:33:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.262 08:33:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.262 08:33:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.262 08:33:59 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:37.262 08:33:59 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:37.262 08:33:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:37.262 08:33:59 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:37.262 08:33:59 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:37.262 08:33:59 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:37.262 08:33:59 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:37.262 08:33:59 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:37.262 08:33:59 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:37.262 08:33:59 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:37.262 08:33:59 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:37.262 08:33:59 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:37.262 08:33:59 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:43.863 08:34:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.863 08:34:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:43.863 [2024-11-19 08:34:05.155379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:43.863 [2024-11-19 08:34:05.157214] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.863 [2024-11-19 08:34:05.157256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.863 [2024-11-19 08:34:05.157275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.863 [2024-11-19 08:34:05.157291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.863 [2024-11-19 08:34:05.157304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.863 [2024-11-19 08:34:05.157314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.863 [2024-11-19 08:34:05.157327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.863 [2024-11-19 08:34:05.157337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.863 [2024-11-19 08:34:05.157348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.863 [2024-11-19 08:34:05.157358] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.863 [2024-11-19 08:34:05.157369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.863 [2024-11-19 08:34:05.157378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.863 08:34:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.863 08:34:05 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:43.863 [2024-11-19 08:34:05.554659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:43.863 [2024-11-19 08:34:05.556694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.863 [2024-11-19 08:34:05.556765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.863 [2024-11-19 08:34:05.556779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.863 [2024-11-19 08:34:05.556796] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.863 [2024-11-19 08:34:05.556805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.863 [2024-11-19 08:34:05.556816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.863 [2024-11-19 08:34:05.556824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.863 [2024-11-19 08:34:05.556834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.863 [2024-11-19 08:34:05.556843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.863 [2024-11-19 08:34:05.556857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.863 [2024-11-19 08:34:05.556865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.863 [2024-11-19 08:34:05.556876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:43.863 08:34:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.863 08:34:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:43.863 08:34:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:43.863 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:44.130 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:44.130 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:44.130 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:44.130 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:44.130 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:44.130 
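From here on use_bdev=true, so after each removal the helper stops trusting sysfs alone and instead polls the running SPDK target until bdev_get_bdevs no longer reports the removed PCI addresses, printing 'Still waiting for %s to be gone' in between. The rough shape of that wait loop as reflected by the @50/@51 trace above (the real script's control flow may be arranged slightly differently):

  # poll until none of the removed controllers still backs an NVMe bdev in the target
  bdfs=($(bdev_bdfs))
  while ((${#bdfs[@]} > 0)); do
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
      bdfs=($(bdev_bdfs))
  done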
08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:44.130 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:44.130 08:34:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:44.130 08:34:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:44.130 08:34:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:44.130 08:34:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:56.358 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:56.358 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:56.359 08:34:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.359 08:34:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:56.359 08:34:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:56.359 [2024-11-19 08:34:18.130595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:56.359 08:34:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.359 08:34:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:56.359 [2024-11-19 08:34:18.132667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.359 [2024-11-19 08:34:18.132708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.359 [2024-11-19 08:34:18.132740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.359 [2024-11-19 08:34:18.132758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.359 [2024-11-19 08:34:18.132771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.359 [2024-11-19 08:34:18.132782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.359 [2024-11-19 08:34:18.132795] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.359 [2024-11-19 08:34:18.132804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.359 [2024-11-19 08:34:18.132819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.359 [2024-11-19 08:34:18.132828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.359 [2024-11-19 08:34:18.132840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.359 [2024-11-19 08:34:18.132851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.359 08:34:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:56.359 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:56.929 [2024-11-19 08:34:18.529857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:56.929 [2024-11-19 08:34:18.531707] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.929 [2024-11-19 08:34:18.531757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.929 [2024-11-19 08:34:18.531771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.929 [2024-11-19 08:34:18.531789] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.929 [2024-11-19 08:34:18.531798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.929 [2024-11-19 08:34:18.531809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.929 [2024-11-19 08:34:18.531818] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.929 [2024-11-19 08:34:18.531828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.929 [2024-11-19 08:34:18.531837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.929 [2024-11-19 08:34:18.531846] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.929 [2024-11-19 08:34:18.531855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.929 [2024-11-19 08:34:18.531867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.929 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:56.929 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:56.929 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:56.929 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:56.929 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:56.929 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:56.929 08:34:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.929 08:34:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:56.929 08:34:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.929 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:56.929 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:57.194 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:57.194 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:57.194 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:57.194 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:57.194 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:57.194 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:57.194 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:57.194 08:34:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:57.194 08:34:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:57.194 08:34:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:57.194 08:34:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:09.431 08:34:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.431 08:34:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.431 08:34:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:09.431 [2024-11-19 08:34:31.105755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:09.431 [2024-11-19 08:34:31.107877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.431 [2024-11-19 08:34:31.107915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.431 [2024-11-19 08:34:31.107940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.431 [2024-11-19 08:34:31.107955] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.431 [2024-11-19 08:34:31.107967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.431 [2024-11-19 08:34:31.107976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.431 [2024-11-19 08:34:31.107986] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.431 [2024-11-19 08:34:31.107995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.431 [2024-11-19 08:34:31.108007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.431 [2024-11-19 08:34:31.108015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.431 [2024-11-19 08:34:31.108025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.431 [2024-11-19 08:34:31.108034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:09.431 08:34:31 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:09.431 08:34:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.431 08:34:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.431 08:34:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:09.431 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:09.691 [2024-11-19 08:34:31.505017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:09.691 [2024-11-19 08:34:31.507027] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.691 [2024-11-19 08:34:31.507073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.691 [2024-11-19 08:34:31.507090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.691 [2024-11-19 08:34:31.507109] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.691 [2024-11-19 08:34:31.507119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.691 [2024-11-19 08:34:31.507134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.691 [2024-11-19 08:34:31.507144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.691 [2024-11-19 08:34:31.507158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.691 [2024-11-19 08:34:31.507168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.691 [2024-11-19 08:34:31.507179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.691 [2024-11-19 08:34:31.507189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.691 [2024-11-19 08:34:31.507201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.950 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:09.950 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:09.950 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:09.950 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:09.950 08:34:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.950 08:34:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.950 08:34:31 sw_hotplug 
-- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:09.950 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:09.950 08:34:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.950 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:09.950 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:09.950 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:09.950 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:09.950 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:10.209 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:10.209 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:10.209 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:10.209 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:10.209 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:10.209 08:34:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:10.209 08:34:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:10.209 08:34:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.98 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.98 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.98 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.98 2 00:13:22.423 remove_attach_helper took 44.98s to complete (handling 2 nvme drive(s)) 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # 
debug_remove_attach_helper 3 6 true 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:22.423 08:34:44 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:22.423 08:34:44 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:29.001 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:29.002 08:34:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.002 08:34:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:29.002 [2024-11-19 08:34:50.168611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
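The bdev_bdfs helper driving these checks asks SPDK which NVMe controllers still back bdevs and prints their PCI addresses, so the test can compare the list against the expected BDFs after each plug/unplug. A minimal standalone sketch of what the trace shows follows; the direct rpc.py call and its path are assumptions (the test itself goes through its own rpc_cmd wrapper).

#!/usr/bin/env bash
# Sketch of the bdev_bdfs helper traced above: list every PCI address that still
# backs an NVMe bdev, one per line, deduplicated.
# Assumption: scripts/rpc.py talks to the running app's default RPC socket.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

bdev_bdfs() {
    "$RPC" bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

bdfs=($(bdev_bdfs))
echo "NVMe-backed bdevs remaining: ${#bdfs[@]}"
printf '  %s\n' "${bdfs[@]}"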
00:13:29.002 [2024-11-19 08:34:50.170002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.002 [2024-11-19 08:34:50.170036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.002 [2024-11-19 08:34:50.170053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.002 [2024-11-19 08:34:50.170094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.002 [2024-11-19 08:34:50.170110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.002 [2024-11-19 08:34:50.170136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.002 [2024-11-19 08:34:50.170147] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.002 [2024-11-19 08:34:50.170157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.002 [2024-11-19 08:34:50.170171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.002 [2024-11-19 08:34:50.170180] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.002 [2024-11-19 08:34:50.170192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.002 [2024-11-19 08:34:50.170201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.002 08:34:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:29.002 [2024-11-19 08:34:50.567858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
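The failed-state and aborted-command messages here are the expected fallout of the surprise removal the helper performs. The xtrace only records the values being echoed (1, uio_pci_generic, the BDFs), not their redirection targets, so the sysfs paths in the sketch below are assumptions about how such a remove/rescan cycle is typically driven, not lines quoted from sw_hotplug.sh.

# Hedged sketch of one hotplug event like the ones traced here. The echoed
# values come from the trace; every sysfs path below is an assumption, since
# xtrace does not show where the echoes are redirected.
nvmes=(0000:00:10.0 0000:00:11.0)
hotplug_wait=6

# Surprise-remove both controllers.
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"        # assumed target of 'echo 1'
done
sleep "$hotplug_wait"

# Bring them back and hand them to uio_pci_generic.
echo 1 > /sys/bus/pci/rescan                           # assumed target of the later 'echo 1'
for dev in "${nvmes[@]}"; do
    if [[ -e /sys/bus/pci/devices/$dev/driver ]]; then
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"          # assumed
    fi
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # assumed
    echo "$dev" > /sys/bus/pci/drivers_probe                             # assumed
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"                # plausibly what the trailing empty echo clears
done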
00:13:29.002 [2024-11-19 08:34:50.569152] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.002 [2024-11-19 08:34:50.569197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.002 [2024-11-19 08:34:50.569210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.002 [2024-11-19 08:34:50.569226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.002 [2024-11-19 08:34:50.569235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.002 [2024-11-19 08:34:50.569246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.002 [2024-11-19 08:34:50.569255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.002 [2024-11-19 08:34:50.569265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.002 [2024-11-19 08:34:50.569274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.002 [2024-11-19 08:34:50.569284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.002 [2024-11-19 08:34:50.569293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.002 [2024-11-19 08:34:50.569306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:29.002 08:34:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.002 08:34:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:29.002 08:34:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:29.002 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:29.262 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:29.262 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:29.262 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:29.262 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:29.262 08:34:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:29.262 08:34:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:29.262 08:34:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:29.262 08:34:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:41.490 08:35:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.490 08:35:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:41.490 08:35:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:41.490 08:35:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.490 08:35:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:41.490 [2024-11-19 08:35:03.143824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:41.490 [2024-11-19 08:35:03.145080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.490 [2024-11-19 08:35:03.145106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.490 [2024-11-19 08:35:03.145122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.490 [2024-11-19 08:35:03.145153] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.490 [2024-11-19 08:35:03.145166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.490 [2024-11-19 08:35:03.145176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.490 [2024-11-19 08:35:03.145187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.490 [2024-11-19 08:35:03.145197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.490 [2024-11-19 08:35:03.145208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.490 [2024-11-19 08:35:03.145218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.490 [2024-11-19 08:35:03.145228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.490 [2024-11-19 08:35:03.145238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.490 08:35:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:41.490 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:42.064 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:42.064 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:42.064 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:42.064 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:42.064 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:42.064 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:42.064 08:35:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.064 08:35:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:42.064 08:35:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.064 [2024-11-19 08:35:03.742674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
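This whole plug/unplug loop runs inside the timing_cmd/TIMEFORMAT=%2R frame traced a little earlier, which is how the "remove_attach_helper took 44.98s / 45.49s" summaries get produced. A rough sketch of that timing pattern is below; it is not the literal autotest_common.sh implementation.

# Rough sketch of the timing wrapper seen in the trace. TIMEFORMAT=%2R makes
# bash's `time` print only the elapsed seconds.
timing_cmd() {
    local cmd_es=0
    local time=0 TIMEFORMAT=%2R
    # `time` writes to stderr, so capture stderr of the grouped command.
    time=$( { time "$@" >/dev/null; } 2>&1 ) || cmd_es=$?
    echo "$time"
    return "$cmd_es"
}

helper_time=$(timing_cmd remove_attach_helper 3 6 true)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" 2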
00:13:42.064 [2024-11-19 08:35:03.743900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:42.064 [2024-11-19 08:35:03.743942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.064 [2024-11-19 08:35:03.743956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.064 [2024-11-19 08:35:03.743974] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:42.064 [2024-11-19 08:35:03.743982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.064 [2024-11-19 08:35:03.743993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.064 [2024-11-19 08:35:03.744002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:42.064 [2024-11-19 08:35:03.744013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.064 [2024-11-19 08:35:03.744021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.064 [2024-11-19 08:35:03.744042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:42.064 [2024-11-19 08:35:03.744051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.064 [2024-11-19 08:35:03.744078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.064 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:42.064 08:35:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:42.632 08:35:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.632 08:35:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:42.632 08:35:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:42.632 08:35:04 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:42.632 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:42.892 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:42.892 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:42.892 08:35:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:55.109 08:35:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.109 08:35:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:55.109 08:35:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:55.109 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:55.109 08:35:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.109 08:35:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:55.109 08:35:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.109 [2024-11-19 08:35:16.717866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:55.109 [2024-11-19 08:35:16.719085] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:55.109 [2024-11-19 08:35:16.719117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:55.109 [2024-11-19 08:35:16.719133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.109 [2024-11-19 08:35:16.719148] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:55.109 [2024-11-19 08:35:16.719161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:55.109 [2024-11-19 08:35:16.719169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.109 [2024-11-19 08:35:16.719180] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:55.110 [2024-11-19 08:35:16.719188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:55.110 [2024-11-19 08:35:16.719198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.110 [2024-11-19 08:35:16.719206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:55.110 [2024-11-19 08:35:16.719215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:55.110 [2024-11-19 08:35:16.719223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.110 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:55.110 08:35:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:55.387 [2024-11-19 08:35:17.117096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:55.387 [2024-11-19 08:35:17.118188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:55.387 [2024-11-19 08:35:17.118228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:55.387 [2024-11-19 08:35:17.118241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.387 [2024-11-19 08:35:17.118257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:55.387 [2024-11-19 08:35:17.118266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:55.387 [2024-11-19 08:35:17.118276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.387 [2024-11-19 08:35:17.118284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:55.387 [2024-11-19 08:35:17.118297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:55.387 [2024-11-19 08:35:17.118305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.387 [2024-11-19 08:35:17.118314] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:55.387 [2024-11-19 08:35:17.118322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:55.387 [2024-11-19 08:35:17.118332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.387 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:55.387 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:55.387 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:55.387 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:55.387 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:55.387 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:55.387 08:35:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.387 08:35:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:55.387 08:35:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.387 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:55.387 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:55.651 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:55.651 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:55.651 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:55.651 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:55.651 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:55.651 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:55.651 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:55.651 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:55.651 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:55.651 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:55.651 08:35:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:07.867 08:35:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:07.867 08:35:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:07.867 08:35:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:07.867 08:35:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:07.867 08:35:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:07.867 08:35:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.867 08:35:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:07.867 08:35:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.49 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.49 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:07.867 08:35:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.49 00:14:07.867 08:35:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.49 2 00:14:07.867 remove_attach_helper took 45.49s to complete (handling 2 nvme drive(s)) 08:35:29 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:07.867 08:35:29 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80048 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 80048 ']' 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 80048 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80048 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.867 killing process with pid 80048 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80048' 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@973 -- # kill 80048 00:14:07.867 08:35:29 sw_hotplug -- common/autotest_common.sh@978 -- # wait 80048 00:14:08.435 08:35:30 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:08.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:09.002 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:09.002 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:09.002 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:09.259 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:09.259 00:14:09.259 real 2m30.628s 00:14:09.259 user 1m49.884s 00:14:09.259 sys 0m20.589s 00:14:09.259 08:35:31 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.259 08:35:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:09.259 ************************************ 00:14:09.259 END TEST sw_hotplug 00:14:09.259 ************************************ 00:14:09.259 08:35:31 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:14:09.259 08:35:31 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:09.259 08:35:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:09.259 08:35:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.259 08:35:31 -- common/autotest_common.sh@10 -- # set +x 00:14:09.259 ************************************ 00:14:09.259 START TEST nvme_xnvme 00:14:09.259 ************************************ 00:14:09.259 08:35:31 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:09.259 * Looking for test storage... 00:14:09.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:09.259 08:35:31 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:09.259 08:35:31 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:09.259 08:35:31 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:09.259 08:35:31 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.259 08:35:31 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:09.517 08:35:31 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:09.517 08:35:31 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:09.518 08:35:31 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.518 08:35:31 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:09.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.518 --rc genhtml_branch_coverage=1 00:14:09.518 --rc genhtml_function_coverage=1 00:14:09.518 --rc genhtml_legend=1 00:14:09.518 --rc geninfo_all_blocks=1 00:14:09.518 --rc geninfo_unexecuted_blocks=1 00:14:09.518 00:14:09.518 ' 00:14:09.518 08:35:31 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:09.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.518 --rc genhtml_branch_coverage=1 00:14:09.518 --rc genhtml_function_coverage=1 00:14:09.518 --rc genhtml_legend=1 00:14:09.518 --rc geninfo_all_blocks=1 00:14:09.518 --rc geninfo_unexecuted_blocks=1 00:14:09.518 00:14:09.518 ' 00:14:09.518 08:35:31 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:09.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.518 --rc genhtml_branch_coverage=1 00:14:09.518 --rc genhtml_function_coverage=1 00:14:09.518 --rc genhtml_legend=1 00:14:09.518 --rc geninfo_all_blocks=1 00:14:09.518 --rc geninfo_unexecuted_blocks=1 00:14:09.518 00:14:09.518 ' 00:14:09.518 08:35:31 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:09.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.518 --rc genhtml_branch_coverage=1 00:14:09.518 --rc genhtml_function_coverage=1 00:14:09.518 --rc genhtml_legend=1 00:14:09.518 --rc geninfo_all_blocks=1 00:14:09.518 --rc geninfo_unexecuted_blocks=1 00:14:09.518 00:14:09.518 ' 00:14:09.518 08:35:31 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.518 08:35:31 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.518 08:35:31 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.518 08:35:31 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.518 08:35:31 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.518 08:35:31 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:09.518 08:35:31 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.518 08:35:31 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:14:09.518 08:35:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:09.518 08:35:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.518 08:35:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.518 ************************************ 00:14:09.518 START TEST xnvme_to_malloc_dd_copy 00:14:09.518 ************************************ 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:14:09.518 08:35:31 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:09.518 08:35:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:09.518 { 00:14:09.518 "subsystems": [ 00:14:09.518 { 00:14:09.518 "subsystem": "bdev", 00:14:09.518 "config": [ 00:14:09.518 { 00:14:09.518 "params": { 00:14:09.518 "block_size": 512, 00:14:09.518 "num_blocks": 2097152, 00:14:09.518 "name": "malloc0" 00:14:09.518 }, 00:14:09.518 "method": "bdev_malloc_create" 00:14:09.518 }, 00:14:09.518 { 00:14:09.518 "params": { 00:14:09.518 "io_mechanism": "libaio", 00:14:09.518 "filename": "/dev/nullb0", 00:14:09.518 "name": "null0" 00:14:09.518 }, 00:14:09.518 "method": "bdev_xnvme_create" 00:14:09.518 }, 00:14:09.518 { 00:14:09.518 "method": "bdev_wait_for_examine" 00:14:09.518 } 00:14:09.518 ] 00:14:09.518 } 00:14:09.518 ] 00:14:09.518 } 00:14:09.518 [2024-11-19 08:35:31.277343] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
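The JSON printed just above is the entire bdev configuration for this pass: a 1 GiB malloc ramdisk (2097152 blocks of 512 bytes) and an xnvme bdev bound to /dev/nullb0 through libaio. The traced spdk_dd call can be reproduced roughly as sketched below; the binary path and the null_blk module load mirror what appears elsewhere in this log.

#!/usr/bin/env bash
# Sketch of the traced copy: malloc0 (1 GiB ramdisk bdev) -> null0 (xnvme bdev
# on /dev/nullb0, libaio io_mechanism).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

modprobe null_blk gb=1    # provides /dev/nullb0, as init_null_blk does in the trace

"$SPDK_DD" --ib=malloc0 --ob=null0 --json <(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create"
        },
        {
          "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
)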
00:14:09.518 [2024-11-19 08:35:31.277473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81401 ] 00:14:09.518 [2024-11-19 08:35:31.419161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.776 [2024-11-19 08:35:31.449499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.153  [2024-11-19T08:35:34.030Z] Copying: 275/1024 [MB] (275 MBps) [2024-11-19T08:35:34.967Z] Copying: 549/1024 [MB] (274 MBps) [2024-11-19T08:35:35.903Z] Copying: 818/1024 [MB] (269 MBps) [2024-11-19T08:35:36.163Z] Copying: 1024/1024 [MB] (average 273 MBps) 00:14:14.256 00:14:14.256 08:35:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:14.256 08:35:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:14.257 08:35:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:14.257 08:35:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:14.257 { 00:14:14.257 "subsystems": [ 00:14:14.257 { 00:14:14.257 "subsystem": "bdev", 00:14:14.257 "config": [ 00:14:14.257 { 00:14:14.257 "params": { 00:14:14.257 "block_size": 512, 00:14:14.257 "num_blocks": 2097152, 00:14:14.257 "name": "malloc0" 00:14:14.257 }, 00:14:14.257 "method": "bdev_malloc_create" 00:14:14.257 }, 00:14:14.257 { 00:14:14.257 "params": { 00:14:14.257 "io_mechanism": "libaio", 00:14:14.257 "filename": "/dev/nullb0", 00:14:14.257 "name": "null0" 00:14:14.257 }, 00:14:14.257 "method": "bdev_xnvme_create" 00:14:14.257 }, 00:14:14.257 { 00:14:14.257 "method": "bdev_wait_for_examine" 00:14:14.257 } 00:14:14.257 ] 00:14:14.257 } 00:14:14.257 ] 00:14:14.257 } 00:14:14.257 [2024-11-19 08:35:36.135634] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:14:14.257 [2024-11-19 08:35:36.135786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81462 ] 00:14:14.516 [2024-11-19 08:35:36.294096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.516 [2024-11-19 08:35:36.318438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.896  [2024-11-19T08:35:38.753Z] Copying: 274/1024 [MB] (274 MBps) [2024-11-19T08:35:39.705Z] Copying: 554/1024 [MB] (280 MBps) [2024-11-19T08:35:40.643Z] Copying: 829/1024 [MB] (274 MBps) [2024-11-19T08:35:40.903Z] Copying: 1024/1024 [MB] (average 276 MBps) 00:14:18.996 00:14:18.996 08:35:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:18.996 08:35:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:18.996 08:35:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:18.996 08:35:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:18.996 08:35:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:18.996 08:35:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:18.996 { 00:14:18.996 "subsystems": [ 00:14:18.996 { 00:14:18.996 "subsystem": "bdev", 00:14:18.996 "config": [ 00:14:18.996 { 00:14:18.996 "params": { 00:14:18.996 "block_size": 512, 00:14:18.996 "num_blocks": 2097152, 00:14:18.996 "name": "malloc0" 00:14:18.996 }, 00:14:18.996 "method": "bdev_malloc_create" 00:14:18.996 }, 00:14:18.996 { 00:14:18.996 "params": { 00:14:18.996 "io_mechanism": "io_uring", 00:14:18.996 "filename": "/dev/nullb0", 00:14:18.996 "name": "null0" 00:14:18.996 }, 00:14:18.996 "method": "bdev_xnvme_create" 00:14:18.996 }, 00:14:18.996 { 00:14:18.996 "method": "bdev_wait_for_examine" 00:14:18.996 } 00:14:18.996 ] 00:14:18.996 } 00:14:18.996 ] 00:14:18.996 } 00:14:19.256 [2024-11-19 08:35:40.933660] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
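The second half of the copy test repeats both directions with only the xnvme io_mechanism flipped from libaio to io_uring, via the associative-array reassignment visible in the trace. A compact sketch of that loop structure, with the per-pass details elided:

# Same bdev config each pass; only the xnvme io_mechanism changes.
declare -A method_bdev_xnvme_create_0=(
    [name]=null0
    [filename]=/dev/nullb0
    [io_mechanism]=libaio
)

for io in libaio io_uring; do
    method_bdev_xnvme_create_0[io_mechanism]=$io
    # ...regenerate the JSON from the array and rerun spdk_dd in both
    #    directions: --ib=malloc0 --ob=null0, then --ib=null0 --ob=malloc0,
    #    as timed above...
done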
00:14:19.256 [2024-11-19 08:35:40.933808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81521 ] 00:14:19.256 [2024-11-19 08:35:41.090212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.256 [2024-11-19 08:35:41.115355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.637  [2024-11-19T08:35:43.482Z] Copying: 279/1024 [MB] (279 MBps) [2024-11-19T08:35:44.420Z] Copying: 560/1024 [MB] (281 MBps) [2024-11-19T08:35:45.400Z] Copying: 839/1024 [MB] (279 MBps) [2024-11-19T08:35:45.660Z] Copying: 1024/1024 [MB] (average 279 MBps) 00:14:23.754 00:14:23.754 08:35:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:23.754 08:35:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:23.754 08:35:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:23.754 08:35:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:23.754 { 00:14:23.754 "subsystems": [ 00:14:23.754 { 00:14:23.754 "subsystem": "bdev", 00:14:23.754 "config": [ 00:14:23.754 { 00:14:23.754 "params": { 00:14:23.754 "block_size": 512, 00:14:23.754 "num_blocks": 2097152, 00:14:23.754 "name": "malloc0" 00:14:23.754 }, 00:14:23.754 "method": "bdev_malloc_create" 00:14:23.754 }, 00:14:23.754 { 00:14:23.754 "params": { 00:14:23.754 "io_mechanism": "io_uring", 00:14:23.754 "filename": "/dev/nullb0", 00:14:23.754 "name": "null0" 00:14:23.754 }, 00:14:23.754 "method": "bdev_xnvme_create" 00:14:23.754 }, 00:14:23.754 { 00:14:23.754 "method": "bdev_wait_for_examine" 00:14:23.754 } 00:14:23.754 ] 00:14:23.754 } 00:14:23.754 ] 00:14:23.754 } 00:14:23.754 [2024-11-19 08:35:45.656967] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:14:23.754 [2024-11-19 08:35:45.657104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81581 ] 00:14:24.014 [2024-11-19 08:35:45.814325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.014 [2024-11-19 08:35:45.839416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.394  [2024-11-19T08:35:48.237Z] Copying: 284/1024 [MB] (284 MBps) [2024-11-19T08:35:49.173Z] Copying: 540/1024 [MB] (256 MBps) [2024-11-19T08:35:50.552Z] Copying: 773/1024 [MB] (233 MBps) [2024-11-19T08:35:50.552Z] Copying: 1008/1024 [MB] (234 MBps) [2024-11-19T08:35:50.811Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:14:28.904 00:14:28.904 08:35:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:28.904 08:35:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:28.904 00:14:28.904 real 0m19.588s 00:14:28.904 user 0m15.954s 00:14:28.904 sys 0m3.183s 00:14:28.904 08:35:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.904 08:35:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:28.904 ************************************ 00:14:28.904 END TEST xnvme_to_malloc_dd_copy 00:14:28.904 ************************************ 00:14:29.196 08:35:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:29.196 08:35:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:29.196 08:35:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.196 08:35:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.196 ************************************ 00:14:29.196 START TEST xnvme_bdevperf 00:14:29.196 ************************************ 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:29.196 08:35:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:29.196 { 00:14:29.196 "subsystems": [ 00:14:29.196 { 00:14:29.196 "subsystem": "bdev", 00:14:29.196 "config": [ 00:14:29.196 { 00:14:29.196 "params": { 00:14:29.196 "io_mechanism": "libaio", 00:14:29.196 "filename": "/dev/nullb0", 00:14:29.196 "name": "null0" 00:14:29.196 }, 00:14:29.196 "method": "bdev_xnvme_create" 00:14:29.196 }, 00:14:29.196 { 00:14:29.196 "method": "bdev_wait_for_examine" 00:14:29.196 } 00:14:29.196 ] 00:14:29.196 } 00:14:29.196 ] 00:14:29.196 } 00:14:29.196 [2024-11-19 08:35:50.958491] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:29.196 [2024-11-19 08:35:50.958626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81669 ] 00:14:29.456 [2024-11-19 08:35:51.117148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.456 [2024-11-19 08:35:51.145530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.456 Running I/O for 5 seconds... 
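The bdevperf half of the suite points the same null_blk-backed xnvme bdev at SPDK's bdevperf example instead of spdk_dd: 4 KiB random reads at queue depth 64 for 5 seconds, first with libaio and then with io_uring. A hedged reproduction of the traced command line and configuration:

#!/usr/bin/env bash
# Sketch of the traced bdevperf run against the xnvme bdev "null0" on /dev/nullb0.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

modprobe null_blk gb=1

"$BDEVPERF" -q 64 -w randread -t 5 -T null0 -o 4096 --json <(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
)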
00:14:31.774 148864.00 IOPS, 581.50 MiB/s [2024-11-19T08:35:54.619Z] 163584.00 IOPS, 639.00 MiB/s [2024-11-19T08:35:55.557Z] 170474.67 IOPS, 665.92 MiB/s [2024-11-19T08:35:56.494Z] 173088.00 IOPS, 676.12 MiB/s 00:14:34.587 Latency(us) 00:14:34.587 [2024-11-19T08:35:56.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.587 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:34.587 null0 : 5.00 174370.50 681.13 0.00 0.00 364.60 149.35 2146.38 00:14:34.587 [2024-11-19T08:35:56.494Z] =================================================================================================================== 00:14:34.587 [2024-11-19T08:35:56.494Z] Total : 174370.50 681.13 0.00 0.00 364.60 149.35 2146.38 00:14:34.587 08:35:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:34.587 08:35:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:34.587 08:35:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:34.587 08:35:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:34.587 08:35:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:34.587 08:35:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:34.846 { 00:14:34.846 "subsystems": [ 00:14:34.846 { 00:14:34.846 "subsystem": "bdev", 00:14:34.846 "config": [ 00:14:34.846 { 00:14:34.846 "params": { 00:14:34.846 "io_mechanism": "io_uring", 00:14:34.846 "filename": "/dev/nullb0", 00:14:34.846 "name": "null0" 00:14:34.846 }, 00:14:34.846 "method": "bdev_xnvme_create" 00:14:34.846 }, 00:14:34.846 { 00:14:34.846 "method": "bdev_wait_for_examine" 00:14:34.846 } 00:14:34.846 ] 00:14:34.846 } 00:14:34.846 ] 00:14:34.846 } 00:14:34.846 [2024-11-19 08:35:56.549697] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:34.846 [2024-11-19 08:35:56.549816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81733 ] 00:14:34.847 [2024-11-19 08:35:56.704646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.847 [2024-11-19 08:35:56.730484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.106 Running I/O for 5 seconds... 
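In these result tables the MiB/s column is simply IOPS multiplied by the 4096-byte IO size; a quick check against the libaio null0 row above:

# MiB/s = IOPS x 4096 B / 2^20; verifies the null0 (libaio) average reported above.
echo 'scale=2; 174370.50 * 4096 / (1024 * 1024)' | bc
# -> 681.13 MiB/s, matching the table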
00:14:37.027 212800.00 IOPS, 831.25 MiB/s [2024-11-19T08:35:59.870Z] 212864.00 IOPS, 831.50 MiB/s [2024-11-19T08:36:01.245Z] 217088.00 IOPS, 848.00 MiB/s [2024-11-19T08:36:01.813Z] 208864.00 IOPS, 815.88 MiB/s 00:14:39.906 Latency(us) 00:14:39.906 [2024-11-19T08:36:01.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.906 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:39.906 null0 : 5.00 209719.91 819.22 0.00 0.00 302.70 160.98 2060.52 00:14:39.906 [2024-11-19T08:36:01.813Z] =================================================================================================================== 00:14:39.906 [2024-11-19T08:36:01.813Z] Total : 209719.91 819.22 0.00 0.00 302.70 160.98 2060.52 00:14:40.165 08:36:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:40.165 08:36:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:40.165 00:14:40.165 real 0m11.193s 00:14:40.165 user 0m8.627s 00:14:40.165 sys 0m2.391s 00:14:40.165 08:36:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.166 08:36:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:40.166 ************************************ 00:14:40.166 END TEST xnvme_bdevperf 00:14:40.166 ************************************ 00:14:40.424 00:14:40.424 real 0m31.047s 00:14:40.424 user 0m24.710s 00:14:40.424 sys 0m5.723s 00:14:40.424 08:36:02 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.424 08:36:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:40.424 ************************************ 00:14:40.424 END TEST nvme_xnvme 00:14:40.424 ************************************ 00:14:40.424 08:36:02 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:40.424 08:36:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:40.424 08:36:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.424 08:36:02 -- common/autotest_common.sh@10 -- # set +x 00:14:40.424 ************************************ 00:14:40.424 START TEST blockdev_xnvme 00:14:40.424 ************************************ 00:14:40.424 08:36:02 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:40.424 * Looking for test storage... 
00:14:40.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:40.424 08:36:02 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:40.424 08:36:02 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:40.424 08:36:02 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:40.683 08:36:02 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.683 08:36:02 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.684 08:36:02 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:14:40.684 08:36:02 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.684 08:36:02 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:40.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.684 --rc genhtml_branch_coverage=1 00:14:40.684 --rc genhtml_function_coverage=1 00:14:40.684 --rc genhtml_legend=1 00:14:40.684 --rc geninfo_all_blocks=1 00:14:40.684 --rc geninfo_unexecuted_blocks=1 00:14:40.684 00:14:40.684 ' 00:14:40.684 08:36:02 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:40.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.684 --rc genhtml_branch_coverage=1 00:14:40.684 --rc genhtml_function_coverage=1 00:14:40.684 --rc genhtml_legend=1 
00:14:40.684 --rc geninfo_all_blocks=1 00:14:40.684 --rc geninfo_unexecuted_blocks=1 00:14:40.684 00:14:40.684 ' 00:14:40.684 08:36:02 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:40.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.684 --rc genhtml_branch_coverage=1 00:14:40.684 --rc genhtml_function_coverage=1 00:14:40.684 --rc genhtml_legend=1 00:14:40.684 --rc geninfo_all_blocks=1 00:14:40.684 --rc geninfo_unexecuted_blocks=1 00:14:40.684 00:14:40.684 ' 00:14:40.684 08:36:02 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:40.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.684 --rc genhtml_branch_coverage=1 00:14:40.684 --rc genhtml_function_coverage=1 00:14:40.684 --rc genhtml_legend=1 00:14:40.684 --rc geninfo_all_blocks=1 00:14:40.684 --rc geninfo_unexecuted_blocks=1 00:14:40.684 00:14:40.684 ' 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81871 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:40.684 08:36:02 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 81871 00:14:40.684 08:36:02 blockdev_xnvme -- common/autotest_common.sh@835 -- # 
'[' -z 81871 ']' 00:14:40.684 08:36:02 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.684 08:36:02 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.684 08:36:02 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.684 08:36:02 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.684 08:36:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:40.684 [2024-11-19 08:36:02.490869] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:40.684 [2024-11-19 08:36:02.491014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81871 ] 00:14:40.943 [2024-11-19 08:36:02.648131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.943 [2024-11-19 08:36:02.672827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.511 08:36:03 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.511 08:36:03 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:14:41.511 08:36:03 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:41.511 08:36:03 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:14:41.511 08:36:03 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:41.511 08:36:03 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:41.511 08:36:03 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:42.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:42.413 Waiting for block devices as requested 00:14:42.413 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:42.413 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:42.672 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:42.672 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:47.948 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:14:47.948 
08:36:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:47.948 08:36:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:47.948 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:47.949 nvme0n1 00:14:47.949 nvme1n1 00:14:47.949 nvme2n1 00:14:47.949 nvme2n2 00:14:47.949 nvme2n3 00:14:47.949 nvme3n1 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "88fbcbf1-52f2-4548-9dbf-248ffa61e7c0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "88fbcbf1-52f2-4548-9dbf-248ffa61e7c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "90bd5967-78b0-44bf-96cc-191a860258ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "90bd5967-78b0-44bf-96cc-191a860258ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b9535d97-5a02-427f-bcf0-fc85cad58b59"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b9535d97-5a02-427f-bcf0-fc85cad58b59",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "4b88b0aa-8cba-4418-a50d-4a4ce9008387"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4b88b0aa-8cba-4418-a50d-4a4ce9008387",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "9632dfe5-1ecb-47ec-9c50-f10b694e2567"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9632dfe5-1ecb-47ec-9c50-f10b694e2567",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b6faf9ea-9ee1-4c61-896b-14367216f7ee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b6faf9ea-9ee1-4c61-896b-14367216f7ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:47.949 08:36:09 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 81871 00:14:47.949 08:36:09 blockdev_xnvme -- 
common/autotest_common.sh@954 -- # '[' -z 81871 ']' 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 81871 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81871 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.949 killing process with pid 81871 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81871' 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 81871 00:14:47.949 08:36:09 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 81871 00:14:48.518 08:36:10 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:48.518 08:36:10 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:48.518 08:36:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:48.518 08:36:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.518 08:36:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:48.518 ************************************ 00:14:48.518 START TEST bdev_hello_world 00:14:48.518 ************************************ 00:14:48.518 08:36:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:48.518 [2024-11-19 08:36:10.258141] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:48.518 [2024-11-19 08:36:10.258309] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82226 ] 00:14:48.518 [2024-11-19 08:36:10.419626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.778 [2024-11-19 08:36:10.451054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.778 [2024-11-19 08:36:10.638132] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:48.778 [2024-11-19 08:36:10.638187] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:48.778 [2024-11-19 08:36:10.638231] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:48.778 [2024-11-19 08:36:10.640519] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:48.778 [2024-11-19 08:36:10.640784] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:48.778 [2024-11-19 08:36:10.640811] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:48.778 [2024-11-19 08:36:10.640979] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:14:48.778 00:14:48.778 [2024-11-19 08:36:10.641011] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:49.037 00:14:49.037 real 0m0.672s 00:14:49.037 user 0m0.369s 00:14:49.037 sys 0m0.190s 00:14:49.037 08:36:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.037 08:36:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:49.037 ************************************ 00:14:49.037 END TEST bdev_hello_world 00:14:49.037 ************************************ 00:14:49.037 08:36:10 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:49.037 08:36:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:49.037 08:36:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.037 08:36:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:49.037 ************************************ 00:14:49.037 START TEST bdev_bounds 00:14:49.037 ************************************ 00:14:49.037 08:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:14:49.037 08:36:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=82251 00:14:49.037 08:36:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:49.037 Process bdevio pid: 82251 00:14:49.037 08:36:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 82251' 00:14:49.037 08:36:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 82251 00:14:49.037 08:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 82251 ']' 00:14:49.037 08:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.037 08:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.037 08:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.038 08:36:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:49.038 08:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.038 08:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:49.297 [2024-11-19 08:36:10.985689] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:14:49.297 [2024-11-19 08:36:10.985826] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82251 ] 00:14:49.297 [2024-11-19 08:36:11.144288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:49.297 [2024-11-19 08:36:11.172439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.297 [2024-11-19 08:36:11.172540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.297 [2024-11-19 08:36:11.172653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.232 08:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.232 08:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:14:50.232 08:36:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:50.232 I/O targets: 00:14:50.232 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:50.232 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:50.232 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:50.232 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:50.232 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:50.232 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:50.232 00:14:50.232 00:14:50.232 CUnit - A unit testing framework for C - Version 2.1-3 00:14:50.232 http://cunit.sourceforge.net/ 00:14:50.232 00:14:50.232 00:14:50.232 Suite: bdevio tests on: nvme3n1 00:14:50.232 Test: blockdev write read block ...passed 00:14:50.232 Test: blockdev write zeroes read block ...passed 00:14:50.232 Test: blockdev write zeroes read no split ...passed 00:14:50.232 Test: blockdev write zeroes read split ...passed 00:14:50.232 Test: blockdev write zeroes read split partial ...passed 00:14:50.232 Test: blockdev reset ...passed 00:14:50.232 Test: blockdev write read 8 blocks ...passed 00:14:50.232 Test: blockdev write read size > 128k ...passed 00:14:50.232 Test: blockdev write read invalid size ...passed 00:14:50.232 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:50.232 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:50.232 Test: blockdev write read max offset ...passed 00:14:50.232 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:50.232 Test: blockdev writev readv 8 blocks ...passed 00:14:50.232 Test: blockdev writev readv 30 x 1block ...passed 00:14:50.232 Test: blockdev writev readv block ...passed 00:14:50.232 Test: blockdev writev readv size > 128k ...passed 00:14:50.232 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:50.232 Test: blockdev comparev and writev ...passed 00:14:50.232 Test: blockdev nvme passthru rw ...passed 00:14:50.233 Test: blockdev nvme passthru vendor specific ...passed 00:14:50.233 Test: blockdev nvme admin passthru ...passed 00:14:50.233 Test: blockdev copy ...passed 00:14:50.233 Suite: bdevio tests on: nvme2n3 00:14:50.233 Test: blockdev write read block ...passed 00:14:50.233 Test: blockdev write zeroes read block ...passed 00:14:50.233 Test: blockdev write zeroes read no split ...passed 00:14:50.233 Test: blockdev write zeroes read split ...passed 00:14:50.233 Test: blockdev write zeroes read split partial ...passed 00:14:50.233 Test: blockdev reset ...passed 
00:14:50.233 Test: blockdev write read 8 blocks ...passed 00:14:50.233 Test: blockdev write read size > 128k ...passed 00:14:50.233 Test: blockdev write read invalid size ...passed 00:14:50.233 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:50.233 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:50.233 Test: blockdev write read max offset ...passed 00:14:50.233 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:50.233 Test: blockdev writev readv 8 blocks ...passed 00:14:50.233 Test: blockdev writev readv 30 x 1block ...passed 00:14:50.233 Test: blockdev writev readv block ...passed 00:14:50.233 Test: blockdev writev readv size > 128k ...passed 00:14:50.233 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:50.233 Test: blockdev comparev and writev ...passed 00:14:50.233 Test: blockdev nvme passthru rw ...passed 00:14:50.233 Test: blockdev nvme passthru vendor specific ...passed 00:14:50.233 Test: blockdev nvme admin passthru ...passed 00:14:50.233 Test: blockdev copy ...passed 00:14:50.233 Suite: bdevio tests on: nvme2n2 00:14:50.233 Test: blockdev write read block ...passed 00:14:50.233 Test: blockdev write zeroes read block ...passed 00:14:50.233 Test: blockdev write zeroes read no split ...passed 00:14:50.233 Test: blockdev write zeroes read split ...passed 00:14:50.233 Test: blockdev write zeroes read split partial ...passed 00:14:50.233 Test: blockdev reset ...passed 00:14:50.233 Test: blockdev write read 8 blocks ...passed 00:14:50.233 Test: blockdev write read size > 128k ...passed 00:14:50.233 Test: blockdev write read invalid size ...passed 00:14:50.233 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:50.233 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:50.233 Test: blockdev write read max offset ...passed 00:14:50.233 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:50.233 Test: blockdev writev readv 8 blocks ...passed 00:14:50.233 Test: blockdev writev readv 30 x 1block ...passed 00:14:50.233 Test: blockdev writev readv block ...passed 00:14:50.233 Test: blockdev writev readv size > 128k ...passed 00:14:50.233 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:50.233 Test: blockdev comparev and writev ...passed 00:14:50.233 Test: blockdev nvme passthru rw ...passed 00:14:50.233 Test: blockdev nvme passthru vendor specific ...passed 00:14:50.233 Test: blockdev nvme admin passthru ...passed 00:14:50.233 Test: blockdev copy ...passed 00:14:50.233 Suite: bdevio tests on: nvme2n1 00:14:50.233 Test: blockdev write read block ...passed 00:14:50.233 Test: blockdev write zeroes read block ...passed 00:14:50.233 Test: blockdev write zeroes read no split ...passed 00:14:50.233 Test: blockdev write zeroes read split ...passed 00:14:50.233 Test: blockdev write zeroes read split partial ...passed 00:14:50.233 Test: blockdev reset ...passed 00:14:50.233 Test: blockdev write read 8 blocks ...passed 00:14:50.233 Test: blockdev write read size > 128k ...passed 00:14:50.233 Test: blockdev write read invalid size ...passed 00:14:50.233 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:50.233 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:50.233 Test: blockdev write read max offset ...passed 00:14:50.233 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:50.233 Test: blockdev writev readv 8 blocks 
...passed 00:14:50.233 Test: blockdev writev readv 30 x 1block ...passed 00:14:50.233 Test: blockdev writev readv block ...passed 00:14:50.233 Test: blockdev writev readv size > 128k ...passed 00:14:50.233 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:50.233 Test: blockdev comparev and writev ...passed 00:14:50.233 Test: blockdev nvme passthru rw ...passed 00:14:50.233 Test: blockdev nvme passthru vendor specific ...passed 00:14:50.233 Test: blockdev nvme admin passthru ...passed 00:14:50.233 Test: blockdev copy ...passed 00:14:50.233 Suite: bdevio tests on: nvme1n1 00:14:50.233 Test: blockdev write read block ...passed 00:14:50.233 Test: blockdev write zeroes read block ...passed 00:14:50.233 Test: blockdev write zeroes read no split ...passed 00:14:50.233 Test: blockdev write zeroes read split ...passed 00:14:50.233 Test: blockdev write zeroes read split partial ...passed 00:14:50.233 Test: blockdev reset ...passed 00:14:50.233 Test: blockdev write read 8 blocks ...passed 00:14:50.233 Test: blockdev write read size > 128k ...passed 00:14:50.233 Test: blockdev write read invalid size ...passed 00:14:50.233 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:50.233 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:50.233 Test: blockdev write read max offset ...passed 00:14:50.233 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:50.233 Test: blockdev writev readv 8 blocks ...passed 00:14:50.233 Test: blockdev writev readv 30 x 1block ...passed 00:14:50.233 Test: blockdev writev readv block ...passed 00:14:50.233 Test: blockdev writev readv size > 128k ...passed 00:14:50.233 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:50.233 Test: blockdev comparev and writev ...passed 00:14:50.233 Test: blockdev nvme passthru rw ...passed 00:14:50.233 Test: blockdev nvme passthru vendor specific ...passed 00:14:50.233 Test: blockdev nvme admin passthru ...passed 00:14:50.233 Test: blockdev copy ...passed 00:14:50.233 Suite: bdevio tests on: nvme0n1 00:14:50.233 Test: blockdev write read block ...passed 00:14:50.233 Test: blockdev write zeroes read block ...passed 00:14:50.233 Test: blockdev write zeroes read no split ...passed 00:14:50.233 Test: blockdev write zeroes read split ...passed 00:14:50.233 Test: blockdev write zeroes read split partial ...passed 00:14:50.233 Test: blockdev reset ...passed 00:14:50.233 Test: blockdev write read 8 blocks ...passed 00:14:50.233 Test: blockdev write read size > 128k ...passed 00:14:50.233 Test: blockdev write read invalid size ...passed 00:14:50.233 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:50.233 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:50.233 Test: blockdev write read max offset ...passed 00:14:50.233 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:50.233 Test: blockdev writev readv 8 blocks ...passed 00:14:50.233 Test: blockdev writev readv 30 x 1block ...passed 00:14:50.233 Test: blockdev writev readv block ...passed 00:14:50.233 Test: blockdev writev readv size > 128k ...passed 00:14:50.233 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:50.233 Test: blockdev comparev and writev ...passed 00:14:50.233 Test: blockdev nvme passthru rw ...passed 00:14:50.233 Test: blockdev nvme passthru vendor specific ...passed 00:14:50.233 Test: blockdev nvme admin passthru ...passed 00:14:50.233 Test: blockdev copy ...passed 
00:14:50.233 00:14:50.233 Run Summary: Type Total Ran Passed Failed Inactive 00:14:50.233 suites 6 6 n/a 0 0 00:14:50.233 tests 138 138 138 0 0 00:14:50.233 asserts 780 780 780 0 n/a 00:14:50.233 00:14:50.233 Elapsed time = 0.340 seconds 00:14:50.492 0 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 82251 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 82251 ']' 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 82251 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82251 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.492 killing process with pid 82251 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82251' 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 82251 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 82251 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:50.492 00:14:50.492 real 0m1.485s 00:14:50.492 user 0m3.759s 00:14:50.492 sys 0m0.348s 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.492 08:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:50.492 ************************************ 00:14:50.492 END TEST bdev_bounds 00:14:50.492 ************************************ 00:14:50.752 08:36:12 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:50.752 08:36:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:50.752 08:36:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.752 08:36:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:50.752 ************************************ 00:14:50.752 START TEST bdev_nbd 00:14:50.752 ************************************ 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=82300 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 82300 /var/tmp/spdk-nbd.sock 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 82300 ']' 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.752 08:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:50.752 [2024-11-19 08:36:12.549830] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:14:50.752 [2024-11-19 08:36:12.549972] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.012 [2024-11-19 08:36:12.706272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.012 [2024-11-19 08:36:12.732301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:51.582 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.842 
1+0 records in 00:14:51.842 1+0 records out 00:14:51.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664984 s, 6.2 MB/s 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:51.842 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:52.102 1+0 records in 00:14:52.102 1+0 records out 00:14:52.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780343 s, 5.2 MB/s 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:52.102 08:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:52.361 08:36:14 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:52.361 1+0 records in 00:14:52.361 1+0 records out 00:14:52.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787519 s, 5.2 MB/s 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:52.361 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:52.620 1+0 records in 00:14:52.620 1+0 records out 00:14:52.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659099 s, 6.2 MB/s 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:52.620 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:52.879 1+0 records in 00:14:52.879 1+0 records out 00:14:52.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659315 s, 6.2 MB/s 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:52.879 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:14:53.139 08:36:14 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:53.139 1+0 records in 00:14:53.139 1+0 records out 00:14:53.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717935 s, 5.7 MB/s 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:53.139 08:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:53.399 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:53.399 { 00:14:53.399 "nbd_device": "/dev/nbd0", 00:14:53.399 "bdev_name": "nvme0n1" 00:14:53.399 }, 00:14:53.399 { 00:14:53.399 "nbd_device": "/dev/nbd1", 00:14:53.399 "bdev_name": "nvme1n1" 00:14:53.399 }, 00:14:53.399 { 00:14:53.399 "nbd_device": "/dev/nbd2", 00:14:53.399 "bdev_name": "nvme2n1" 00:14:53.399 }, 00:14:53.399 { 00:14:53.399 "nbd_device": "/dev/nbd3", 00:14:53.399 "bdev_name": "nvme2n2" 00:14:53.399 }, 00:14:53.399 { 00:14:53.399 "nbd_device": "/dev/nbd4", 00:14:53.399 "bdev_name": "nvme2n3" 00:14:53.399 }, 00:14:53.399 { 00:14:53.399 "nbd_device": "/dev/nbd5", 00:14:53.399 "bdev_name": "nvme3n1" 00:14:53.399 } 00:14:53.399 ]' 00:14:53.399 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:53.399 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:53.399 { 00:14:53.399 "nbd_device": "/dev/nbd0", 00:14:53.399 "bdev_name": "nvme0n1" 00:14:53.399 }, 00:14:53.399 { 00:14:53.399 "nbd_device": "/dev/nbd1", 00:14:53.399 "bdev_name": "nvme1n1" 00:14:53.399 }, 00:14:53.399 { 00:14:53.399 "nbd_device": "/dev/nbd2", 00:14:53.399 "bdev_name": "nvme2n1" 00:14:53.399 }, 00:14:53.399 { 00:14:53.399 "nbd_device": "/dev/nbd3", 00:14:53.399 "bdev_name": "nvme2n2" 00:14:53.399 }, 00:14:53.399 { 00:14:53.399 "nbd_device": "/dev/nbd4", 00:14:53.399 "bdev_name": "nvme2n3" 00:14:53.400 }, 00:14:53.400 { 00:14:53.400 "nbd_device": "/dev/nbd5", 00:14:53.400 "bdev_name": "nvme3n1" 00:14:53.400 } 00:14:53.400 ]' 00:14:53.400 08:36:15 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:53.400 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:53.400 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:53.400 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:53.400 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:53.400 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:53.400 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.400 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:53.659 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:53.659 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:53.659 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:53.659 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.659 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.659 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:53.659 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:53.659 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.659 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.659 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:53.919 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:53.919 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:53.919 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:53.919 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.919 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.919 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:53.919 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:53.919 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.919 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.919 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:54.241 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:54.241 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:54.241 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:54.241 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.241 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.241 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:54.241 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:54.241 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:54.241 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.241 08:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:54.498 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:54.498 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:54.498 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:54.498 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.498 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.498 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:54.498 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:54.498 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:54.499 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.499 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:54.756 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:54.756 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:54.756 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:54.756 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.756 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.756 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:54.756 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:54.756 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:54.756 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.756 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:55.014 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:55.014 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:55.014 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:55.014 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.014 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.014 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:55.014 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:55.014 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.014 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:55.014 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.014 08:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:55.274 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:55.532 /dev/nbd0 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.532 1+0 records in 00:14:55.532 1+0 records out 00:14:55.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004407 s, 9.3 MB/s 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:55.532 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:55.790 /dev/nbd1 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.790 1+0 records in 00:14:55.790 1+0 records out 00:14:55.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104916 s, 3.9 MB/s 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:55.790 08:36:17 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:55.790 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:56.048 /dev/nbd10 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.305 1+0 records in 00:14:56.305 1+0 records out 00:14:56.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526854 s, 7.8 MB/s 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:56.305 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.306 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.306 08:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:56.306 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.306 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:56.306 08:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:56.563 /dev/nbd11 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.563 08:36:18 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.563 1+0 records in 00:14:56.563 1+0 records out 00:14:56.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547344 s, 7.5 MB/s 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:56.563 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:56.820 /dev/nbd12 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.821 1+0 records in 00:14:56.821 1+0 records out 00:14:56.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486227 s, 8.4 MB/s 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:56.821 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:57.079 /dev/nbd13 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.079 1+0 records in 00:14:57.079 1+0 records out 00:14:57.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654976 s, 6.3 MB/s 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:57.079 08:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:57.339 { 00:14:57.339 "nbd_device": "/dev/nbd0", 00:14:57.339 "bdev_name": "nvme0n1" 00:14:57.339 }, 00:14:57.339 { 00:14:57.339 "nbd_device": "/dev/nbd1", 00:14:57.339 "bdev_name": "nvme1n1" 00:14:57.339 }, 00:14:57.339 { 00:14:57.339 "nbd_device": "/dev/nbd10", 00:14:57.339 "bdev_name": "nvme2n1" 00:14:57.339 }, 00:14:57.339 { 00:14:57.339 "nbd_device": "/dev/nbd11", 00:14:57.339 "bdev_name": "nvme2n2" 00:14:57.339 }, 00:14:57.339 { 00:14:57.339 "nbd_device": "/dev/nbd12", 00:14:57.339 "bdev_name": "nvme2n3" 00:14:57.339 }, 00:14:57.339 { 00:14:57.339 "nbd_device": "/dev/nbd13", 00:14:57.339 "bdev_name": "nvme3n1" 00:14:57.339 } 00:14:57.339 ]' 00:14:57.339 08:36:19 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:57.339 { 00:14:57.339 "nbd_device": "/dev/nbd0", 00:14:57.339 "bdev_name": "nvme0n1" 00:14:57.339 }, 00:14:57.339 { 00:14:57.339 "nbd_device": "/dev/nbd1", 00:14:57.339 "bdev_name": "nvme1n1" 00:14:57.339 }, 00:14:57.339 { 00:14:57.339 "nbd_device": "/dev/nbd10", 00:14:57.339 "bdev_name": "nvme2n1" 00:14:57.339 }, 00:14:57.339 { 00:14:57.339 "nbd_device": "/dev/nbd11", 00:14:57.339 "bdev_name": "nvme2n2" 00:14:57.339 }, 00:14:57.339 { 00:14:57.339 "nbd_device": "/dev/nbd12", 00:14:57.339 "bdev_name": "nvme2n3" 00:14:57.339 }, 00:14:57.339 { 00:14:57.339 "nbd_device": "/dev/nbd13", 00:14:57.339 "bdev_name": "nvme3n1" 00:14:57.339 } 00:14:57.339 ]' 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:57.339 /dev/nbd1 00:14:57.339 /dev/nbd10 00:14:57.339 /dev/nbd11 00:14:57.339 /dev/nbd12 00:14:57.339 /dev/nbd13' 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:57.339 /dev/nbd1 00:14:57.339 /dev/nbd10 00:14:57.339 /dev/nbd11 00:14:57.339 /dev/nbd12 00:14:57.339 /dev/nbd13' 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:57.339 256+0 records in 00:14:57.339 256+0 records out 00:14:57.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113676 s, 92.2 MB/s 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:57.339 256+0 records in 00:14:57.339 256+0 records out 00:14:57.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.087816 s, 11.9 MB/s 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:57.339 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:57.598 256+0 records in 00:14:57.598 256+0 records out 00:14:57.598 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.112344 s, 9.3 MB/s 00:14:57.598 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:57.598 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:57.598 256+0 records in 00:14:57.598 256+0 records out 00:14:57.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0767162 s, 13.7 MB/s 00:14:57.598 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:57.598 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:57.858 256+0 records in 00:14:57.858 256+0 records out 00:14:57.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0945057 s, 11.1 MB/s 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:57.858 256+0 records in 00:14:57.858 256+0 records out 00:14:57.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0979561 s, 10.7 MB/s 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:57.858 256+0 records in 00:14:57.858 256+0 records out 00:14:57.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0930463 s, 11.3 MB/s 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:57.858 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:58.118 08:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:58.118 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:58.118 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:58.118 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.118 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.118 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:58.118 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:58.118 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.118 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.118 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:58.378 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:58.378 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:58.378 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:58.378 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.378 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.378 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:58.378 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:58.378 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.378 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.378 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:58.638 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:58.638 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:58.638 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:58.638 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.638 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.638 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:58.638 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:58.638 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.638 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.638 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:58.898 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:58.898 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:58.898 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:58.898 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.898 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.898 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:58.898 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:58.898 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.898 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.898 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:59.158 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:59.158 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:59.158 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:59.158 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.158 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.158 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:59.158 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:59.158 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.158 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.158 08:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:59.158 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:59.158 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:59.158 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:59.158 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.158 08:36:21 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.158 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:59.158 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:59.158 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.158 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:59.158 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:59.158 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:59.418 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:59.678 malloc_lvol_verify 00:14:59.678 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:59.938 fac8cc22-3799-4977-bf7f-a97cd9a86d6e 00:14:59.938 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:00.197 c56dcd24-f9e9-4be7-a807-fbbf5d35d139 00:15:00.197 08:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:00.485 /dev/nbd0 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:15:00.485 mke2fs 1.47.0 (5-Feb-2023) 00:15:00.485 Discarding device blocks: 0/4096 done 00:15:00.485 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:00.485 00:15:00.485 Allocating group tables: 0/1 done 00:15:00.485 Writing inode tables: 0/1 done 00:15:00.485 Creating journal (1024 blocks): done 00:15:00.485 Writing superblocks and filesystem accounting information: 0/1 done 00:15:00.485 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.485 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 82300 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 82300 ']' 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 82300 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82300 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.775 killing process with pid 82300 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82300' 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 82300 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 82300 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:00.775 00:15:00.775 real 0m10.176s 00:15:00.775 user 0m14.089s 00:15:00.775 sys 0m4.297s 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.775 08:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:00.775 ************************************ 
00:15:00.775 END TEST bdev_nbd 00:15:00.775 ************************************ 00:15:00.775 08:36:22 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:00.775 08:36:22 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:15:00.775 08:36:22 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:15:00.775 08:36:22 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:00.775 08:36:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:00.775 08:36:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.775 08:36:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.035 ************************************ 00:15:01.035 START TEST bdev_fio 00:15:01.035 ************************************ 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:01.035 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:01.035 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:01.036 ************************************ 00:15:01.036 START TEST bdev_fio_rw_verify 00:15:01.036 ************************************ 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:01.036 08:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:01.295 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:01.296 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:01.296 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:01.296 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:01.296 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:01.296 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:01.296 fio-3.35 00:15:01.296 Starting 6 threads 00:15:13.513 00:15:13.513 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=82701: Tue Nov 19 08:36:33 2024 00:15:13.513 read: IOPS=33.3k, BW=130MiB/s (136MB/s)(1301MiB/10001msec) 00:15:13.513 slat (usec): min=2, max=765, avg= 8.76, stdev= 6.36 00:15:13.513 clat (usec): min=107, max=5051, avg=452.53, 
stdev=227.38 00:15:13.513 lat (usec): min=110, max=5086, avg=461.29, stdev=229.13 00:15:13.513 clat percentiles (usec): 00:15:13.513 | 50.000th=[ 412], 99.000th=[ 1090], 99.900th=[ 1565], 99.990th=[ 3752], 00:15:13.513 | 99.999th=[ 4359] 00:15:13.513 write: IOPS=33.6k, BW=131MiB/s (138MB/s)(1313MiB/10001msec); 0 zone resets 00:15:13.513 slat (usec): min=13, max=2401, avg=37.37, stdev=44.60 00:15:13.513 clat (usec): min=71, max=3437, avg=616.26, stdev=283.56 00:15:13.513 lat (usec): min=105, max=3669, avg=653.63, stdev=293.18 00:15:13.513 clat percentiles (usec): 00:15:13.513 | 50.000th=[ 586], 99.000th=[ 1418], 99.900th=[ 1844], 99.990th=[ 2704], 00:15:13.513 | 99.999th=[ 3261] 00:15:13.513 bw ( KiB/s): min=106616, max=162288, per=100.00%, avg=134978.37, stdev=2536.45, samples=114 00:15:13.513 iops : min=26654, max=40572, avg=33744.37, stdev=634.08, samples=114 00:15:13.513 lat (usec) : 100=0.01%, 250=12.52%, 500=39.00%, 750=28.79%, 1000=13.97% 00:15:13.513 lat (msec) : 2=5.65%, 4=0.05%, 10=0.01% 00:15:13.513 cpu : usr=52.25%, sys=27.72%, ctx=9046, majf=0, minf=29445 00:15:13.513 IO depths : 1=11.9%, 2=24.3%, 4=50.7%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:13.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.513 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.513 issued rwts: total=333033,336237,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.513 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:13.513 00:15:13.513 Run status group 0 (all jobs): 00:15:13.513 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=1301MiB (1364MB), run=10001-10001msec 00:15:13.513 WRITE: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=1313MiB (1377MB), run=10001-10001msec 00:15:13.513 ----------------------------------------------------- 00:15:13.513 Suppressions used: 00:15:13.513 count bytes template 00:15:13.513 6 48 /usr/src/fio/parse.c 00:15:13.513 2958 283968 /usr/src/fio/iolog.c 00:15:13.513 1 8 libtcmalloc_minimal.so 00:15:13.513 1 904 libcrypto.so 00:15:13.513 ----------------------------------------------------- 00:15:13.513 00:15:13.513 00:15:13.513 real 0m11.277s 00:15:13.513 user 0m32.196s 00:15:13.513 sys 0m17.004s 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:13.513 ************************************ 00:15:13.513 END TEST bdev_fio_rw_verify 00:15:13.513 ************************************ 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:13.513 08:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:13.514 08:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "88fbcbf1-52f2-4548-9dbf-248ffa61e7c0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "88fbcbf1-52f2-4548-9dbf-248ffa61e7c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "90bd5967-78b0-44bf-96cc-191a860258ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "90bd5967-78b0-44bf-96cc-191a860258ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b9535d97-5a02-427f-bcf0-fc85cad58b59"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b9535d97-5a02-427f-bcf0-fc85cad58b59",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "4b88b0aa-8cba-4418-a50d-4a4ce9008387"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4b88b0aa-8cba-4418-a50d-4a4ce9008387",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "9632dfe5-1ecb-47ec-9c50-f10b694e2567"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9632dfe5-1ecb-47ec-9c50-f10b694e2567",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b6faf9ea-9ee1-4c61-896b-14367216f7ee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b6faf9ea-9ee1-4c61-896b-14367216f7ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:13.514 08:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:13.514 08:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:13.514 /home/vagrant/spdk_repo/spdk 00:15:13.514 ************************************ 00:15:13.514 END TEST bdev_fio 00:15:13.514 ************************************ 00:15:13.514 08:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:13.514 08:36:34 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:13.514 08:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:13.514 00:15:13.514 real 0m11.510s 00:15:13.514 user 0m32.311s 00:15:13.514 sys 0m17.130s 00:15:13.514 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.514 08:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:13.514 08:36:34 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:13.514 08:36:34 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:13.514 08:36:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:13.514 08:36:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.514 08:36:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.514 ************************************ 00:15:13.514 START TEST bdev_verify 00:15:13.514 ************************************ 00:15:13.514 08:36:34 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:13.514 [2024-11-19 08:36:34.362961] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:15:13.514 [2024-11-19 08:36:34.363107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82865 ] 00:15:13.514 [2024-11-19 08:36:34.525181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:13.514 [2024-11-19 08:36:34.554633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.514 [2024-11-19 08:36:34.554801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.514 Running I/O for 5 seconds... 
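The verify pass that starts here is driven by SPDK's bdevperf example application; every parameter is already visible in the bash trace above. Run by hand against the same bdev.json, the invocation would look roughly like this (command and flags copied from the trace rather than inferred):

  # 128 outstanding I/Os per job, 4 KiB I/O size, 'verify' workload for 5 seconds on cores 0-1 (mask 0x3)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The per-job header lines that follow ("depth: 128, IO size: 4096") show how -q and -o map onto the workload.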
00:15:15.388 24224.00 IOPS, 94.62 MiB/s [2024-11-19T08:36:38.232Z] 23024.00 IOPS, 89.94 MiB/s [2024-11-19T08:36:39.168Z] 23082.67 IOPS, 90.17 MiB/s [2024-11-19T08:36:40.108Z] 22752.00 IOPS, 88.88 MiB/s [2024-11-19T08:36:40.108Z] 22649.60 IOPS, 88.47 MiB/s 00:15:18.201 Latency(us) 00:15:18.201 [2024-11-19T08:36:40.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.201 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:18.201 Verification LBA range: start 0x0 length 0xa0000 00:15:18.201 nvme0n1 : 5.04 1550.74 6.06 0.00 0.00 82373.74 15453.90 85168.18 00:15:18.201 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:18.201 Verification LBA range: start 0xa0000 length 0xa0000 00:15:18.201 nvme0n1 : 5.04 1777.59 6.94 0.00 0.00 71909.95 11847.99 76468.21 00:15:18.201 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:18.201 Verification LBA range: start 0x0 length 0xbd0bd 00:15:18.201 nvme1n1 : 5.05 2571.19 10.04 0.00 0.00 49435.43 5065.45 63189.30 00:15:18.201 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:18.201 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:18.201 nvme1n1 : 5.05 3046.00 11.90 0.00 0.00 41851.95 3892.09 54260.37 00:15:18.201 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:18.201 Verification LBA range: start 0x0 length 0x80000 00:15:18.201 nvme2n1 : 5.06 1568.90 6.13 0.00 0.00 81039.33 12076.94 97073.41 00:15:18.201 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:18.201 Verification LBA range: start 0x80000 length 0x80000 00:15:18.201 nvme2n1 : 5.06 1796.46 7.02 0.00 0.00 70832.94 9215.11 65936.66 00:15:18.201 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:18.201 Verification LBA range: start 0x0 length 0x80000 00:15:18.201 nvme2n2 : 5.07 1565.39 6.11 0.00 0.00 81044.62 6553.60 76468.21 00:15:18.201 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:18.201 Verification LBA range: start 0x80000 length 0x80000 00:15:18.201 nvme2n2 : 5.06 1795.06 7.01 0.00 0.00 70787.35 10073.66 70057.70 00:15:18.201 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:18.201 Verification LBA range: start 0x0 length 0x80000 00:15:18.201 nvme2n3 : 5.07 1565.02 6.11 0.00 0.00 80921.86 6324.65 83794.50 00:15:18.201 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:18.201 Verification LBA range: start 0x80000 length 0x80000 00:15:18.201 nvme2n3 : 5.06 1794.42 7.01 0.00 0.00 70720.17 12420.36 72347.17 00:15:18.201 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:18.201 Verification LBA range: start 0x0 length 0x20000 00:15:18.201 nvme3n1 : 5.07 1564.62 6.11 0.00 0.00 80870.76 6897.02 92952.37 00:15:18.201 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:18.201 Verification LBA range: start 0x20000 length 0x20000 00:15:18.201 nvme3n1 : 5.06 1797.09 7.02 0.00 0.00 70524.37 10359.84 65020.87 00:15:18.201 [2024-11-19T08:36:40.108Z] =================================================================================================================== 00:15:18.201 [2024-11-19T08:36:40.108Z] Total : 22392.48 87.47 0.00 0.00 68123.92 3892.09 97073.41 00:15:18.461 00:15:18.461 real 0m5.853s 00:15:18.461 user 0m8.930s 00:15:18.461 sys 0m1.821s 00:15:18.461 08:36:40 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.461 08:36:40 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:18.461 ************************************ 00:15:18.461 END TEST bdev_verify 00:15:18.461 ************************************ 00:15:18.461 08:36:40 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:18.461 08:36:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:18.461 08:36:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.461 08:36:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:18.461 ************************************ 00:15:18.461 START TEST bdev_verify_big_io 00:15:18.461 ************************************ 00:15:18.461 08:36:40 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:18.461 [2024-11-19 08:36:40.276116] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:15:18.461 [2024-11-19 08:36:40.276328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82956 ] 00:15:18.721 [2024-11-19 08:36:40.436299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:18.721 [2024-11-19 08:36:40.464327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.721 [2024-11-19 08:36:40.464450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.981 Running I/O for 5 seconds... 
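A quick sanity check on the throughput columns: the MiB/s figure in each Total row is simply IOPS multiplied by the I/O size. The first line below reproduces the Total row of the 4 KiB verify run above, the second the Total row of the 64 KiB big-I/O run that follows:

  # IOPS x I/O size / 2^20 -> MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 22392.48 * 4096 / 1048576 }'    # 87.47 MiB/s (verify run above)
  awk 'BEGIN { printf "%.2f MiB/s\n", 2222.12 * 65536 / 1048576 }'    # 138.88 MiB/s (big-I/O run below)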
00:15:24.816 1984.00 IOPS, 124.00 MiB/s [2024-11-19T08:36:46.988Z] 3600.00 IOPS, 225.00 MiB/s [2024-11-19T08:36:46.988Z] 4114.00 IOPS, 257.12 MiB/s 00:15:25.081 Latency(us) 00:15:25.081 [2024-11-19T08:36:46.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.081 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:25.081 Verification LBA range: start 0x0 length 0xa000 00:15:25.081 nvme0n1 : 5.63 108.07 6.75 0.00 0.00 1105585.99 114473.36 1846226.39 00:15:25.081 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:25.081 Verification LBA range: start 0xa000 length 0xa000 00:15:25.081 nvme0n1 : 5.61 136.81 8.55 0.00 0.00 911107.20 108520.75 1472585.33 00:15:25.081 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:25.081 Verification LBA range: start 0x0 length 0xbd0b 00:15:25.081 nvme1n1 : 5.76 166.68 10.42 0.00 0.00 703747.89 59068.26 1011028.74 00:15:25.081 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:25.081 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:25.081 nvme1n1 : 5.62 222.26 13.89 0.00 0.00 556209.19 16255.22 897471.16 00:15:25.081 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:25.081 Verification LBA range: start 0x0 length 0x8000 00:15:25.081 nvme2n1 : 5.80 152.29 9.52 0.00 0.00 738813.67 54718.27 1904836.75 00:15:25.081 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:25.081 Verification LBA range: start 0x8000 length 0x8000 00:15:25.081 nvme2n1 : 5.60 182.83 11.43 0.00 0.00 664167.52 62731.40 619071.94 00:15:25.081 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:25.081 Verification LBA range: start 0x0 length 0x8000 00:15:25.081 nvme2n2 : 5.86 196.75 12.30 0.00 0.00 552460.62 25413.09 582440.47 00:15:25.081 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:25.081 Verification LBA range: start 0x8000 length 0x8000 00:15:25.081 nvme2n2 : 5.60 185.62 11.60 0.00 0.00 643347.12 71889.27 1142902.05 00:15:25.081 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:25.081 Verification LBA range: start 0x0 length 0x8000 00:15:25.081 nvme2n3 : 6.05 204.11 12.76 0.00 0.00 509608.14 19231.52 695998.04 00:15:25.081 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:25.081 Verification LBA range: start 0x8000 length 0x8000 00:15:25.081 nvme2n3 : 5.61 134.15 8.38 0.00 0.00 874911.38 70515.59 2183235.97 00:15:25.081 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:25.081 Verification LBA range: start 0x0 length 0x2000 00:15:25.081 nvme3n1 : 6.23 318.65 19.92 0.00 0.00 319804.54 2031.90 1201512.41 00:15:25.081 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:25.081 Verification LBA range: start 0x2000 length 0x2000 00:15:25.081 nvme3n1 : 5.61 213.91 13.37 0.00 0.00 541789.98 3119.40 728966.37 00:15:25.081 [2024-11-19T08:36:46.988Z] =================================================================================================================== 00:15:25.081 [2024-11-19T08:36:46.988Z] Total : 2222.12 138.88 0.00 0.00 618627.52 2031.90 2183235.97 00:15:25.341 00:15:25.341 real 0m6.980s 00:15:25.341 user 0m12.727s 00:15:25.341 sys 0m0.568s 00:15:25.341 08:36:47 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.341 08:36:47 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.341 ************************************ 00:15:25.341 END TEST bdev_verify_big_io 00:15:25.341 ************************************ 00:15:25.341 08:36:47 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:25.341 08:36:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:25.341 08:36:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.341 08:36:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.341 ************************************ 00:15:25.341 START TEST bdev_write_zeroes 00:15:25.341 ************************************ 00:15:25.341 08:36:47 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:25.601 [2024-11-19 08:36:47.318846] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:15:25.601 [2024-11-19 08:36:47.319049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83061 ] 00:15:25.601 [2024-11-19 08:36:47.472526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.601 [2024-11-19 08:36:47.501984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.861 Running I/O for 1 seconds... 
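The write_zeroes workload only makes sense for bdevs that advertise the capability, and the bdev dump earlier in this log reports "write_zeroes": true for every xNVMe bdev. Against a running SPDK target with the same bdevs configured, that check can be reproduced with the standard bdev_get_bdevs RPC and the same jq pattern the suite used earlier for unmap (a sketch, not part of the test itself; the field name is taken from the JSON dump above):

  # list bdevs advertising write_zeroes support
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'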
00:15:27.241 67520.00 IOPS, 263.75 MiB/s 00:15:27.241 Latency(us) 00:15:27.241 [2024-11-19T08:36:49.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.241 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:27.241 nvme0n1 : 1.02 10925.32 42.68 0.00 0.00 11704.49 6925.64 24955.19 00:15:27.241 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:27.241 nvme1n1 : 1.02 13048.96 50.97 0.00 0.00 9772.40 4321.37 17171.00 00:15:27.241 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:27.242 nvme2n1 : 1.02 10906.60 42.60 0.00 0.00 11660.97 6753.93 24497.30 00:15:27.242 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:27.242 nvme2n2 : 1.02 10897.26 42.57 0.00 0.00 11661.35 6925.64 24726.25 00:15:27.242 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:27.242 nvme2n3 : 1.02 10887.81 42.53 0.00 0.00 11663.66 7211.82 25069.67 00:15:27.242 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:27.242 nvme3n1 : 1.02 10878.52 42.49 0.00 0.00 11667.47 7440.77 25413.09 00:15:27.242 [2024-11-19T08:36:49.149Z] =================================================================================================================== 00:15:27.242 [2024-11-19T08:36:49.149Z] Total : 67544.47 263.85 0.00 0.00 11305.14 4321.37 25413.09 00:15:27.242 00:15:27.242 real 0m1.717s 00:15:27.242 user 0m0.996s 00:15:27.242 sys 0m0.557s 00:15:27.242 ************************************ 00:15:27.242 END TEST bdev_write_zeroes 00:15:27.242 ************************************ 00:15:27.242 08:36:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.242 08:36:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:27.242 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:27.242 08:36:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:27.242 08:36:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.242 08:36:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.242 ************************************ 00:15:27.242 START TEST bdev_json_nonenclosed 00:15:27.242 ************************************ 00:15:27.242 08:36:49 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:27.242 [2024-11-19 08:36:49.120416] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:15:27.242 [2024-11-19 08:36:49.120659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83100 ] 00:15:27.500 [2024-11-19 08:36:49.281275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.500 [2024-11-19 08:36:49.309451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.500 [2024-11-19 08:36:49.309646] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:27.500 [2024-11-19 08:36:49.309793] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:27.500 [2024-11-19 08:36:49.309849] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:27.500 00:15:27.500 real 0m0.382s 00:15:27.500 user 0m0.159s 00:15:27.500 sys 0m0.117s 00:15:27.500 08:36:49 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.500 08:36:49 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:27.500 ************************************ 00:15:27.500 END TEST bdev_json_nonenclosed 00:15:27.500 ************************************ 00:15:27.759 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:27.759 08:36:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:27.759 08:36:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.759 08:36:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.759 ************************************ 00:15:27.759 START TEST bdev_json_nonarray 00:15:27.759 ************************************ 00:15:27.759 08:36:49 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:27.759 [2024-11-19 08:36:49.555568] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:15:27.759 [2024-11-19 08:36:49.555714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83124 ] 00:15:28.019 [2024-11-19 08:36:49.716424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.019 [2024-11-19 08:36:49.744868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.019 [2024-11-19 08:36:49.744997] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:15:28.019 [2024-11-19 08:36:49.745019] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:28.019 [2024-11-19 08:36:49.745041] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:28.019 ************************************ 00:15:28.019 END TEST bdev_json_nonarray 00:15:28.019 ************************************ 00:15:28.019 00:15:28.019 real 0m0.368s 00:15:28.019 user 0m0.155s 00:15:28.019 sys 0m0.110s 00:15:28.019 08:36:49 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.019 08:36:49 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:28.019 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:15:28.019 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:15:28.019 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:15:28.019 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:15:28.019 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:15:28.019 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:28.019 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:28.019 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:28.019 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:28.019 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:28.019 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:28.019 08:36:49 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:28.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:50.900 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:50.900 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:50.900 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:53.441 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:53.441 00:15:53.441 real 1m12.706s 00:15:53.441 user 1m24.434s 00:15:53.441 sys 1m27.532s 00:15:53.441 08:37:14 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.441 ************************************ 00:15:53.441 END TEST blockdev_xnvme 00:15:53.441 ************************************ 00:15:53.441 08:37:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.441 08:37:14 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:53.441 08:37:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:53.441 08:37:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.441 08:37:14 -- common/autotest_common.sh@10 -- # set +x 00:15:53.441 ************************************ 00:15:53.441 START TEST ublk 00:15:53.441 ************************************ 00:15:53.441 08:37:14 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:53.441 * Looking for test storage... 
00:15:53.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:53.441 08:37:15 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:53.441 08:37:15 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:15:53.441 08:37:15 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:53.441 08:37:15 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:53.441 08:37:15 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.441 08:37:15 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.441 08:37:15 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.441 08:37:15 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.441 08:37:15 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.441 08:37:15 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.441 08:37:15 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.441 08:37:15 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.441 08:37:15 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.441 08:37:15 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.441 08:37:15 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.441 08:37:15 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:53.441 08:37:15 ublk -- scripts/common.sh@345 -- # : 1 00:15:53.441 08:37:15 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.441 08:37:15 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:53.441 08:37:15 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:53.441 08:37:15 ublk -- scripts/common.sh@353 -- # local d=1 00:15:53.441 08:37:15 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.441 08:37:15 ublk -- scripts/common.sh@355 -- # echo 1 00:15:53.441 08:37:15 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.441 08:37:15 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:53.441 08:37:15 ublk -- scripts/common.sh@353 -- # local d=2 00:15:53.441 08:37:15 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.441 08:37:15 ublk -- scripts/common.sh@355 -- # echo 2 00:15:53.441 08:37:15 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.441 08:37:15 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.441 08:37:15 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.441 08:37:15 ublk -- scripts/common.sh@368 -- # return 0 00:15:53.441 08:37:15 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.441 08:37:15 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:53.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.441 --rc genhtml_branch_coverage=1 00:15:53.441 --rc genhtml_function_coverage=1 00:15:53.441 --rc genhtml_legend=1 00:15:53.441 --rc geninfo_all_blocks=1 00:15:53.441 --rc geninfo_unexecuted_blocks=1 00:15:53.441 00:15:53.441 ' 00:15:53.441 08:37:15 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:53.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.441 --rc genhtml_branch_coverage=1 00:15:53.441 --rc genhtml_function_coverage=1 00:15:53.441 --rc genhtml_legend=1 00:15:53.441 --rc geninfo_all_blocks=1 00:15:53.441 --rc geninfo_unexecuted_blocks=1 00:15:53.441 00:15:53.441 ' 00:15:53.441 08:37:15 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:53.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.441 --rc genhtml_branch_coverage=1 00:15:53.441 --rc 
genhtml_function_coverage=1 00:15:53.441 --rc genhtml_legend=1 00:15:53.441 --rc geninfo_all_blocks=1 00:15:53.441 --rc geninfo_unexecuted_blocks=1 00:15:53.441 00:15:53.441 ' 00:15:53.441 08:37:15 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:53.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.441 --rc genhtml_branch_coverage=1 00:15:53.441 --rc genhtml_function_coverage=1 00:15:53.441 --rc genhtml_legend=1 00:15:53.441 --rc geninfo_all_blocks=1 00:15:53.441 --rc geninfo_unexecuted_blocks=1 00:15:53.441 00:15:53.441 ' 00:15:53.441 08:37:15 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:53.441 08:37:15 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:53.441 08:37:15 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:53.441 08:37:15 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:53.441 08:37:15 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:53.441 08:37:15 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:53.441 08:37:15 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:53.441 08:37:15 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:53.442 08:37:15 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:53.442 08:37:15 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:53.442 08:37:15 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:53.442 08:37:15 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:53.442 08:37:15 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:53.442 08:37:15 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:53.442 08:37:15 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:53.442 08:37:15 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:53.442 08:37:15 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:53.442 08:37:15 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:53.442 08:37:15 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:53.442 08:37:15 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:53.442 08:37:15 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:53.442 08:37:15 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.442 08:37:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.442 ************************************ 00:15:53.442 START TEST test_save_ublk_config 00:15:53.442 ************************************ 00:15:53.442 08:37:15 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:15:53.442 08:37:15 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:53.442 08:37:15 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=83499 00:15:53.442 08:37:15 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:53.442 08:37:15 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:53.442 08:37:15 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 83499 00:15:53.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
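Once this ublk target process is listening on /var/tmp/spdk.sock, the save-config test drives it over the RPC socket; the method and parameter names can be read back out of the configuration it saves further down ("ublk_create_target" with cpumask "1", then "ublk_start_disk" for bdev malloc0 with one queue of depth 128). A minimal sketch of the equivalent manual calls is below; the positional arguments and flag spellings are assumptions from those parameter names, so check scripts/rpc.py ublk_start_disk -h before relying on them:

  # create the ublk target on core 0, then expose malloc0 as /dev/ublkb0 (flags assumed, see -h)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target --cpumask 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk malloc0 0 --num-queues 1 --queue-depth 128

The UBLK_CMD_ADD_DEV / SET_PARAMS / START_DEV control commands in the trace below are the kernel-side effect of that ublk_start_disk call.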
00:15:53.442 08:37:15 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 83499 ']' 00:15:53.442 08:37:15 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.442 08:37:15 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.442 08:37:15 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.442 08:37:15 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.442 08:37:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:53.442 [2024-11-19 08:37:15.277885] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:15:53.442 [2024-11-19 08:37:15.278147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83499 ] 00:15:53.702 [2024-11-19 08:37:15.414529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.702 [2024-11-19 08:37:15.441952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.271 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.271 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:54.271 08:37:16 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:54.271 08:37:16 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:54.271 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.271 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:54.271 [2024-11-19 08:37:16.102736] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:54.271 [2024-11-19 08:37:16.103519] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:54.271 malloc0 00:15:54.271 [2024-11-19 08:37:16.133879] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:54.271 [2024-11-19 08:37:16.133976] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:54.271 [2024-11-19 08:37:16.133989] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:54.271 [2024-11-19 08:37:16.134002] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:54.271 [2024-11-19 08:37:16.141915] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:54.271 [2024-11-19 08:37:16.141951] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:54.271 [2024-11-19 08:37:16.148744] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:54.271 [2024-11-19 08:37:16.148853] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:54.272 [2024-11-19 08:37:16.165847] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:54.272 0 00:15:54.272 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.272 08:37:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:54.272 08:37:16 ublk.test_save_ublk_config -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.272 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:54.532 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.532 08:37:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:54.532 "subsystems": [ 00:15:54.532 { 00:15:54.532 "subsystem": "fsdev", 00:15:54.532 "config": [ 00:15:54.532 { 00:15:54.532 "method": "fsdev_set_opts", 00:15:54.532 "params": { 00:15:54.532 "fsdev_io_pool_size": 65535, 00:15:54.532 "fsdev_io_cache_size": 256 00:15:54.532 } 00:15:54.532 } 00:15:54.532 ] 00:15:54.532 }, 00:15:54.532 { 00:15:54.532 "subsystem": "keyring", 00:15:54.532 "config": [] 00:15:54.532 }, 00:15:54.532 { 00:15:54.532 "subsystem": "iobuf", 00:15:54.532 "config": [ 00:15:54.532 { 00:15:54.532 "method": "iobuf_set_options", 00:15:54.532 "params": { 00:15:54.532 "small_pool_count": 8192, 00:15:54.532 "large_pool_count": 1024, 00:15:54.532 "small_bufsize": 8192, 00:15:54.532 "large_bufsize": 135168, 00:15:54.532 "enable_numa": false 00:15:54.532 } 00:15:54.532 } 00:15:54.532 ] 00:15:54.532 }, 00:15:54.532 { 00:15:54.532 "subsystem": "sock", 00:15:54.532 "config": [ 00:15:54.532 { 00:15:54.532 "method": "sock_set_default_impl", 00:15:54.532 "params": { 00:15:54.532 "impl_name": "posix" 00:15:54.532 } 00:15:54.532 }, 00:15:54.532 { 00:15:54.532 "method": "sock_impl_set_options", 00:15:54.532 "params": { 00:15:54.532 "impl_name": "ssl", 00:15:54.532 "recv_buf_size": 4096, 00:15:54.532 "send_buf_size": 4096, 00:15:54.532 "enable_recv_pipe": true, 00:15:54.532 "enable_quickack": false, 00:15:54.532 "enable_placement_id": 0, 00:15:54.532 "enable_zerocopy_send_server": true, 00:15:54.532 "enable_zerocopy_send_client": false, 00:15:54.532 "zerocopy_threshold": 0, 00:15:54.532 "tls_version": 0, 00:15:54.532 "enable_ktls": false 00:15:54.532 } 00:15:54.532 }, 00:15:54.532 { 00:15:54.532 "method": "sock_impl_set_options", 00:15:54.532 "params": { 00:15:54.532 "impl_name": "posix", 00:15:54.532 "recv_buf_size": 2097152, 00:15:54.532 "send_buf_size": 2097152, 00:15:54.532 "enable_recv_pipe": true, 00:15:54.532 "enable_quickack": false, 00:15:54.532 "enable_placement_id": 0, 00:15:54.532 "enable_zerocopy_send_server": true, 00:15:54.532 "enable_zerocopy_send_client": false, 00:15:54.532 "zerocopy_threshold": 0, 00:15:54.532 "tls_version": 0, 00:15:54.532 "enable_ktls": false 00:15:54.532 } 00:15:54.532 } 00:15:54.532 ] 00:15:54.532 }, 00:15:54.532 { 00:15:54.532 "subsystem": "vmd", 00:15:54.532 "config": [] 00:15:54.532 }, 00:15:54.532 { 00:15:54.532 "subsystem": "accel", 00:15:54.532 "config": [ 00:15:54.532 { 00:15:54.532 "method": "accel_set_options", 00:15:54.532 "params": { 00:15:54.532 "small_cache_size": 128, 00:15:54.532 "large_cache_size": 16, 00:15:54.532 "task_count": 2048, 00:15:54.532 "sequence_count": 2048, 00:15:54.532 "buf_count": 2048 00:15:54.532 } 00:15:54.532 } 00:15:54.532 ] 00:15:54.532 }, 00:15:54.532 { 00:15:54.532 "subsystem": "bdev", 00:15:54.532 "config": [ 00:15:54.532 { 00:15:54.532 "method": "bdev_set_options", 00:15:54.532 "params": { 00:15:54.532 "bdev_io_pool_size": 65535, 00:15:54.532 "bdev_io_cache_size": 256, 00:15:54.532 "bdev_auto_examine": true, 00:15:54.532 "iobuf_small_cache_size": 128, 00:15:54.532 "iobuf_large_cache_size": 16 00:15:54.532 } 00:15:54.532 }, 00:15:54.532 { 00:15:54.532 "method": "bdev_raid_set_options", 00:15:54.532 "params": { 00:15:54.532 "process_window_size_kb": 1024, 00:15:54.532 
"process_max_bandwidth_mb_sec": 0 00:15:54.532 } 00:15:54.532 }, 00:15:54.532 { 00:15:54.532 "method": "bdev_iscsi_set_options", 00:15:54.532 "params": { 00:15:54.532 "timeout_sec": 30 00:15:54.532 } 00:15:54.532 }, 00:15:54.532 { 00:15:54.532 "method": "bdev_nvme_set_options", 00:15:54.532 "params": { 00:15:54.532 "action_on_timeout": "none", 00:15:54.532 "timeout_us": 0, 00:15:54.532 "timeout_admin_us": 0, 00:15:54.532 "keep_alive_timeout_ms": 10000, 00:15:54.532 "arbitration_burst": 0, 00:15:54.532 "low_priority_weight": 0, 00:15:54.532 "medium_priority_weight": 0, 00:15:54.532 "high_priority_weight": 0, 00:15:54.532 "nvme_adminq_poll_period_us": 10000, 00:15:54.532 "nvme_ioq_poll_period_us": 0, 00:15:54.532 "io_queue_requests": 0, 00:15:54.532 "delay_cmd_submit": true, 00:15:54.532 "transport_retry_count": 4, 00:15:54.532 "bdev_retry_count": 3, 00:15:54.532 "transport_ack_timeout": 0, 00:15:54.532 "ctrlr_loss_timeout_sec": 0, 00:15:54.532 "reconnect_delay_sec": 0, 00:15:54.532 "fast_io_fail_timeout_sec": 0, 00:15:54.532 "disable_auto_failback": false, 00:15:54.532 "generate_uuids": false, 00:15:54.532 "transport_tos": 0, 00:15:54.532 "nvme_error_stat": false, 00:15:54.532 "rdma_srq_size": 0, 00:15:54.532 "io_path_stat": false, 00:15:54.532 "allow_accel_sequence": false, 00:15:54.532 "rdma_max_cq_size": 0, 00:15:54.532 "rdma_cm_event_timeout_ms": 0, 00:15:54.532 "dhchap_digests": [ 00:15:54.532 "sha256", 00:15:54.532 "sha384", 00:15:54.532 "sha512" 00:15:54.532 ], 00:15:54.532 "dhchap_dhgroups": [ 00:15:54.532 "null", 00:15:54.532 "ffdhe2048", 00:15:54.532 "ffdhe3072", 00:15:54.532 "ffdhe4096", 00:15:54.532 "ffdhe6144", 00:15:54.532 "ffdhe8192" 00:15:54.532 ] 00:15:54.532 } 00:15:54.532 }, 00:15:54.532 { 00:15:54.532 "method": "bdev_nvme_set_hotplug", 00:15:54.532 "params": { 00:15:54.532 "period_us": 100000, 00:15:54.532 "enable": false 00:15:54.532 } 00:15:54.532 }, 00:15:54.532 { 00:15:54.532 "method": "bdev_malloc_create", 00:15:54.532 "params": { 00:15:54.532 "name": "malloc0", 00:15:54.532 "num_blocks": 8192, 00:15:54.532 "block_size": 4096, 00:15:54.532 "physical_block_size": 4096, 00:15:54.532 "uuid": "2800ede6-1054-4667-94a0-8dc813b5c9ab", 00:15:54.532 "optimal_io_boundary": 0, 00:15:54.532 "md_size": 0, 00:15:54.532 "dif_type": 0, 00:15:54.532 "dif_is_head_of_md": false, 00:15:54.532 "dif_pi_format": 0 00:15:54.533 } 00:15:54.533 }, 00:15:54.533 { 00:15:54.533 "method": "bdev_wait_for_examine" 00:15:54.533 } 00:15:54.533 ] 00:15:54.533 }, 00:15:54.533 { 00:15:54.533 "subsystem": "scsi", 00:15:54.533 "config": null 00:15:54.533 }, 00:15:54.533 { 00:15:54.533 "subsystem": "scheduler", 00:15:54.533 "config": [ 00:15:54.533 { 00:15:54.533 "method": "framework_set_scheduler", 00:15:54.533 "params": { 00:15:54.533 "name": "static" 00:15:54.533 } 00:15:54.533 } 00:15:54.533 ] 00:15:54.533 }, 00:15:54.533 { 00:15:54.533 "subsystem": "vhost_scsi", 00:15:54.533 "config": [] 00:15:54.533 }, 00:15:54.533 { 00:15:54.533 "subsystem": "vhost_blk", 00:15:54.533 "config": [] 00:15:54.533 }, 00:15:54.533 { 00:15:54.533 "subsystem": "ublk", 00:15:54.533 "config": [ 00:15:54.533 { 00:15:54.533 "method": "ublk_create_target", 00:15:54.533 "params": { 00:15:54.533 "cpumask": "1" 00:15:54.533 } 00:15:54.533 }, 00:15:54.533 { 00:15:54.533 "method": "ublk_start_disk", 00:15:54.533 "params": { 00:15:54.533 "bdev_name": "malloc0", 00:15:54.533 "ublk_id": 0, 00:15:54.533 "num_queues": 1, 00:15:54.533 "queue_depth": 128 00:15:54.533 } 00:15:54.533 } 00:15:54.533 ] 00:15:54.533 }, 00:15:54.533 { 
00:15:54.533 "subsystem": "nbd", 00:15:54.533 "config": [] 00:15:54.533 }, 00:15:54.533 { 00:15:54.533 "subsystem": "nvmf", 00:15:54.533 "config": [ 00:15:54.533 { 00:15:54.533 "method": "nvmf_set_config", 00:15:54.533 "params": { 00:15:54.533 "discovery_filter": "match_any", 00:15:54.533 "admin_cmd_passthru": { 00:15:54.533 "identify_ctrlr": false 00:15:54.533 }, 00:15:54.533 "dhchap_digests": [ 00:15:54.533 "sha256", 00:15:54.533 "sha384", 00:15:54.533 "sha512" 00:15:54.533 ], 00:15:54.533 "dhchap_dhgroups": [ 00:15:54.533 "null", 00:15:54.533 "ffdhe2048", 00:15:54.533 "ffdhe3072", 00:15:54.533 "ffdhe4096", 00:15:54.533 "ffdhe6144", 00:15:54.533 "ffdhe8192" 00:15:54.533 ] 00:15:54.533 } 00:15:54.533 }, 00:15:54.533 { 00:15:54.533 "method": "nvmf_set_max_subsystems", 00:15:54.533 "params": { 00:15:54.533 "max_subsystems": 1024 00:15:54.533 } 00:15:54.533 }, 00:15:54.533 { 00:15:54.533 "method": "nvmf_set_crdt", 00:15:54.533 "params": { 00:15:54.533 "crdt1": 0, 00:15:54.533 "crdt2": 0, 00:15:54.533 "crdt3": 0 00:15:54.533 } 00:15:54.533 } 00:15:54.533 ] 00:15:54.533 }, 00:15:54.533 { 00:15:54.533 "subsystem": "iscsi", 00:15:54.533 "config": [ 00:15:54.533 { 00:15:54.533 "method": "iscsi_set_options", 00:15:54.533 "params": { 00:15:54.533 "node_base": "iqn.2016-06.io.spdk", 00:15:54.533 "max_sessions": 128, 00:15:54.533 "max_connections_per_session": 2, 00:15:54.533 "max_queue_depth": 64, 00:15:54.533 "default_time2wait": 2, 00:15:54.533 "default_time2retain": 20, 00:15:54.533 "first_burst_length": 8192, 00:15:54.533 "immediate_data": true, 00:15:54.533 "allow_duplicated_isid": false, 00:15:54.533 "error_recovery_level": 0, 00:15:54.533 "nop_timeout": 60, 00:15:54.533 "nop_in_interval": 30, 00:15:54.533 "disable_chap": false, 00:15:54.533 "require_chap": false, 00:15:54.533 "mutual_chap": false, 00:15:54.533 "chap_group": 0, 00:15:54.533 "max_large_datain_per_connection": 64, 00:15:54.533 "max_r2t_per_connection": 4, 00:15:54.533 "pdu_pool_size": 36864, 00:15:54.533 "immediate_data_pool_size": 16384, 00:15:54.533 "data_out_pool_size": 2048 00:15:54.533 } 00:15:54.533 } 00:15:54.533 ] 00:15:54.533 } 00:15:54.533 ] 00:15:54.533 }' 00:15:54.533 08:37:16 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 83499 00:15:54.533 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 83499 ']' 00:15:54.533 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 83499 00:15:54.533 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:54.533 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.533 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83499 00:15:54.793 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.793 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.793 killing process with pid 83499 00:15:54.793 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83499' 00:15:54.793 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 83499 00:15:54.793 08:37:16 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 83499 00:15:55.053 [2024-11-19 08:37:16.736825] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:55.053 [2024-11-19 08:37:16.781825] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:55.053 [2024-11-19 08:37:16.781953] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:55.053 [2024-11-19 08:37:16.784146] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:55.053 [2024-11-19 08:37:16.784208] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:55.053 [2024-11-19 08:37:16.784237] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:55.053 [2024-11-19 08:37:16.784277] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:55.053 [2024-11-19 08:37:16.784407] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:55.314 08:37:17 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=83537 00:15:55.314 08:37:17 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 83537 00:15:55.314 08:37:17 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 83537 ']' 00:15:55.314 08:37:17 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.314 08:37:17 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.314 08:37:17 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:55.314 08:37:17 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.314 08:37:17 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.314 08:37:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:55.314 08:37:17 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:55.314 "subsystems": [ 00:15:55.314 { 00:15:55.314 "subsystem": "fsdev", 00:15:55.314 "config": [ 00:15:55.314 { 00:15:55.314 "method": "fsdev_set_opts", 00:15:55.314 "params": { 00:15:55.314 "fsdev_io_pool_size": 65535, 00:15:55.314 "fsdev_io_cache_size": 256 00:15:55.314 } 00:15:55.314 } 00:15:55.314 ] 00:15:55.314 }, 00:15:55.314 { 00:15:55.314 "subsystem": "keyring", 00:15:55.314 "config": [] 00:15:55.314 }, 00:15:55.314 { 00:15:55.314 "subsystem": "iobuf", 00:15:55.314 "config": [ 00:15:55.314 { 00:15:55.314 "method": "iobuf_set_options", 00:15:55.314 "params": { 00:15:55.314 "small_pool_count": 8192, 00:15:55.314 "large_pool_count": 1024, 00:15:55.314 "small_bufsize": 8192, 00:15:55.314 "large_bufsize": 135168, 00:15:55.314 "enable_numa": false 00:15:55.314 } 00:15:55.314 } 00:15:55.314 ] 00:15:55.314 }, 00:15:55.314 { 00:15:55.314 "subsystem": "sock", 00:15:55.314 "config": [ 00:15:55.314 { 00:15:55.314 "method": "sock_set_default_impl", 00:15:55.314 "params": { 00:15:55.314 "impl_name": "posix" 00:15:55.314 } 00:15:55.314 }, 00:15:55.314 { 00:15:55.314 "method": "sock_impl_set_options", 00:15:55.314 "params": { 00:15:55.314 "impl_name": "ssl", 00:15:55.314 "recv_buf_size": 4096, 00:15:55.314 "send_buf_size": 4096, 00:15:55.314 "enable_recv_pipe": true, 00:15:55.314 "enable_quickack": false, 00:15:55.314 "enable_placement_id": 0, 00:15:55.314 "enable_zerocopy_send_server": true, 00:15:55.314 "enable_zerocopy_send_client": false, 00:15:55.314 "zerocopy_threshold": 0, 00:15:55.314 "tls_version": 0, 00:15:55.314 "enable_ktls": false 00:15:55.314 } 00:15:55.314 }, 00:15:55.314 { 00:15:55.314 
"method": "sock_impl_set_options", 00:15:55.314 "params": { 00:15:55.314 "impl_name": "posix", 00:15:55.314 "recv_buf_size": 2097152, 00:15:55.314 "send_buf_size": 2097152, 00:15:55.314 "enable_recv_pipe": true, 00:15:55.314 "enable_quickack": false, 00:15:55.314 "enable_placement_id": 0, 00:15:55.314 "enable_zerocopy_send_server": true, 00:15:55.314 "enable_zerocopy_send_client": false, 00:15:55.314 "zerocopy_threshold": 0, 00:15:55.314 "tls_version": 0, 00:15:55.314 "enable_ktls": false 00:15:55.314 } 00:15:55.314 } 00:15:55.314 ] 00:15:55.314 }, 00:15:55.314 { 00:15:55.314 "subsystem": "vmd", 00:15:55.314 "config": [] 00:15:55.314 }, 00:15:55.314 { 00:15:55.314 "subsystem": "accel", 00:15:55.314 "config": [ 00:15:55.314 { 00:15:55.314 "method": "accel_set_options", 00:15:55.314 "params": { 00:15:55.314 "small_cache_size": 128, 00:15:55.314 "large_cache_size": 16, 00:15:55.314 "task_count": 2048, 00:15:55.314 "sequence_count": 2048, 00:15:55.314 "buf_count": 2048 00:15:55.314 } 00:15:55.314 } 00:15:55.314 ] 00:15:55.314 }, 00:15:55.314 { 00:15:55.314 "subsystem": "bdev", 00:15:55.314 "config": [ 00:15:55.314 { 00:15:55.314 "method": "bdev_set_options", 00:15:55.314 "params": { 00:15:55.314 "bdev_io_pool_size": 65535, 00:15:55.314 "bdev_io_cache_size": 256, 00:15:55.314 "bdev_auto_examine": true, 00:15:55.314 "iobuf_small_cache_size": 128, 00:15:55.314 "iobuf_large_cache_size": 16 00:15:55.314 } 00:15:55.314 }, 00:15:55.314 { 00:15:55.314 "method": "bdev_raid_set_options", 00:15:55.314 "params": { 00:15:55.314 "process_window_size_kb": 1024, 00:15:55.314 "process_max_bandwidth_mb_sec": 0 00:15:55.314 } 00:15:55.314 }, 00:15:55.314 { 00:15:55.314 "method": "bdev_iscsi_set_options", 00:15:55.314 "params": { 00:15:55.314 "timeout_sec": 30 00:15:55.314 } 00:15:55.314 }, 00:15:55.314 { 00:15:55.314 "method": "bdev_nvme_set_options", 00:15:55.314 "params": { 00:15:55.314 "action_on_timeout": "none", 00:15:55.314 "timeout_us": 0, 00:15:55.314 "timeout_admin_us": 0, 00:15:55.314 "keep_alive_timeout_ms": 10000, 00:15:55.314 "arbitration_burst": 0, 00:15:55.314 "low_priority_weight": 0, 00:15:55.314 "medium_priority_weight": 0, 00:15:55.314 "high_priority_weight": 0, 00:15:55.314 "nvme_adminq_poll_period_us": 10000, 00:15:55.314 "nvme_ioq_poll_period_us": 0, 00:15:55.314 "io_queue_requests": 0, 00:15:55.314 "delay_cmd_submit": true, 00:15:55.314 "transport_retry_count": 4, 00:15:55.314 "bdev_retry_count": 3, 00:15:55.314 "transport_ack_timeout": 0, 00:15:55.314 "ctrlr_loss_timeout_sec": 0, 00:15:55.314 "reconnect_delay_sec": 0, 00:15:55.314 "fast_io_fail_timeout_sec": 0, 00:15:55.314 "disable_auto_failback": false, 00:15:55.314 "generate_uuids": false, 00:15:55.314 "transport_tos": 0, 00:15:55.315 "nvme_error_stat": false, 00:15:55.315 "rdma_srq_size": 0, 00:15:55.315 "io_path_stat": false, 00:15:55.315 "allow_accel_sequence": false, 00:15:55.315 "rdma_max_cq_size": 0, 00:15:55.315 "rdma_cm_event_timeout_ms": 0, 00:15:55.315 "dhchap_digests": [ 00:15:55.315 "sha256", 00:15:55.315 "sha384", 00:15:55.315 "sha512" 00:15:55.315 ], 00:15:55.315 "dhchap_dhgroups": [ 00:15:55.315 "null", 00:15:55.315 "ffdhe2048", 00:15:55.315 "ffdhe3072", 00:15:55.315 "ffdhe4096", 00:15:55.315 "ffdhe6144", 00:15:55.315 "ffdhe8192" 00:15:55.315 ] 00:15:55.315 } 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "method": "bdev_nvme_set_hotplug", 00:15:55.315 "params": { 00:15:55.315 "period_us": 100000, 00:15:55.315 "enable": false 00:15:55.315 } 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "method": "bdev_malloc_create", 
00:15:55.315 "params": { 00:15:55.315 "name": "malloc0", 00:15:55.315 "num_blocks": 8192, 00:15:55.315 "block_size": 4096, 00:15:55.315 "physical_block_size": 4096, 00:15:55.315 "uuid": "2800ede6-1054-4667-94a0-8dc813b5c9ab", 00:15:55.315 "optimal_io_boundary": 0, 00:15:55.315 "md_size": 0, 00:15:55.315 "dif_type": 0, 00:15:55.315 "dif_is_head_of_md": false, 00:15:55.315 "dif_pi_format": 0 00:15:55.315 } 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "method": "bdev_wait_for_examine" 00:15:55.315 } 00:15:55.315 ] 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "subsystem": "scsi", 00:15:55.315 "config": null 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "subsystem": "scheduler", 00:15:55.315 "config": [ 00:15:55.315 { 00:15:55.315 "method": "framework_set_scheduler", 00:15:55.315 "params": { 00:15:55.315 "name": "static" 00:15:55.315 } 00:15:55.315 } 00:15:55.315 ] 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "subsystem": "vhost_scsi", 00:15:55.315 "config": [] 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "subsystem": "vhost_blk", 00:15:55.315 "config": [] 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "subsystem": "ublk", 00:15:55.315 "config": [ 00:15:55.315 { 00:15:55.315 "method": "ublk_create_target", 00:15:55.315 "params": { 00:15:55.315 "cpumask": "1" 00:15:55.315 } 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "method": "ublk_start_disk", 00:15:55.315 "params": { 00:15:55.315 "bdev_name": "malloc0", 00:15:55.315 "ublk_id": 0, 00:15:55.315 "num_queues": 1, 00:15:55.315 "queue_depth": 128 00:15:55.315 } 00:15:55.315 } 00:15:55.315 ] 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "subsystem": "nbd", 00:15:55.315 "config": [] 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "subsystem": "nvmf", 00:15:55.315 "config": [ 00:15:55.315 { 00:15:55.315 "method": "nvmf_set_config", 00:15:55.315 "params": { 00:15:55.315 "discovery_filter": "match_any", 00:15:55.315 "admin_cmd_passthru": { 00:15:55.315 "identify_ctrlr": false 00:15:55.315 }, 00:15:55.315 "dhchap_digests": [ 00:15:55.315 "sha256", 00:15:55.315 "sha384", 00:15:55.315 "sha512" 00:15:55.315 ], 00:15:55.315 "dhchap_dhgroups": [ 00:15:55.315 "null", 00:15:55.315 "ffdhe2048", 00:15:55.315 "ffdhe3072", 00:15:55.315 "ffdhe4096", 00:15:55.315 "ffdhe6144", 00:15:55.315 "ffdhe8192" 00:15:55.315 ] 00:15:55.315 } 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "method": "nvmf_set_max_subsystems", 00:15:55.315 "params": { 00:15:55.315 "max_subsystems": 1024 00:15:55.315 } 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "method": "nvmf_set_crdt", 00:15:55.315 "params": { 00:15:55.315 "crdt1": 0, 00:15:55.315 "crdt2": 0, 00:15:55.315 "crdt3": 0 00:15:55.315 } 00:15:55.315 } 00:15:55.315 ] 00:15:55.315 }, 00:15:55.315 { 00:15:55.315 "subsystem": "iscsi", 00:15:55.315 "config": [ 00:15:55.315 { 00:15:55.315 "method": "iscsi_set_options", 00:15:55.315 "params": { 00:15:55.315 "node_base": "iqn.2016-06.io.spdk", 00:15:55.315 "max_sessions": 128, 00:15:55.315 "max_connections_per_session": 2, 00:15:55.315 "max_queue_depth": 64, 00:15:55.315 "default_time2wait": 2, 00:15:55.315 "default_time2retain": 20, 00:15:55.315 "first_burst_length": 8192, 00:15:55.315 "immediate_data": true, 00:15:55.315 "allow_duplicated_isid": false, 00:15:55.315 "error_recovery_level": 0, 00:15:55.315 "nop_timeout": 60, 00:15:55.315 "nop_in_interval": 30, 00:15:55.315 "disable_chap": false, 00:15:55.315 "require_chap": false, 00:15:55.315 "mutual_chap": false, 00:15:55.315 "chap_group": 0, 00:15:55.315 "max_large_datain_per_connection": 64, 00:15:55.315 "max_r2t_per_connection": 4, 00:15:55.315 
"pdu_pool_size": 36864, 00:15:55.315 "immediate_data_pool_size": 16384, 00:15:55.315 "data_out_pool_size": 2048 00:15:55.315 } 00:15:55.315 } 00:15:55.315 ] 00:15:55.315 } 00:15:55.315 ] 00:15:55.315 }' 00:15:55.575 [2024-11-19 08:37:17.305887] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:15:55.575 [2024-11-19 08:37:17.306014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83537 ] 00:15:55.575 [2024-11-19 08:37:17.461824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.836 [2024-11-19 08:37:17.488696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.096 [2024-11-19 08:37:17.847733] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:56.096 [2024-11-19 08:37:17.848049] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:56.096 [2024-11-19 08:37:17.854873] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:56.096 [2024-11-19 08:37:17.854938] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:56.096 [2024-11-19 08:37:17.854947] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:56.096 [2024-11-19 08:37:17.854956] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:56.096 [2024-11-19 08:37:17.862292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:56.096 [2024-11-19 08:37:17.862316] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:56.096 [2024-11-19 08:37:17.869735] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:56.096 [2024-11-19 08:37:17.869818] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:56.096 [2024-11-19 08:37:17.890351] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 83537 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 83537 ']' 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 83537 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83537 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.356 killing process with pid 83537 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83537' 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 83537 00:15:56.356 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 83537 00:15:56.616 [2024-11-19 08:37:18.505906] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:56.876 [2024-11-19 08:37:18.536813] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:56.876 [2024-11-19 08:37:18.536938] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:56.876 [2024-11-19 08:37:18.541936] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:56.876 [2024-11-19 08:37:18.541990] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:56.877 [2024-11-19 08:37:18.542004] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:56.877 [2024-11-19 08:37:18.542032] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:56.877 [2024-11-19 08:37:18.542167] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:57.136 08:37:18 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:57.136 00:15:57.136 real 0m3.810s 00:15:57.136 user 0m2.637s 00:15:57.136 sys 0m1.897s 00:15:57.136 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.136 08:37:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:57.136 ************************************ 00:15:57.136 END TEST test_save_ublk_config 00:15:57.136 ************************************ 00:15:57.136 08:37:19 ublk -- ublk/ublk.sh@139 -- # spdk_pid=83593 00:15:57.136 08:37:19 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:57.136 08:37:19 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:57.136 08:37:19 ublk -- ublk/ublk.sh@141 -- # waitforlisten 83593 00:15:57.136 08:37:19 ublk -- common/autotest_common.sh@835 -- # '[' -z 83593 ']' 00:15:57.136 08:37:19 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.136 08:37:19 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.136 08:37:19 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.136 08:37:19 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.136 08:37:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:57.409 [2024-11-19 08:37:19.127506] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:15:57.409 [2024-11-19 08:37:19.127633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83593 ] 00:15:57.409 [2024-11-19 08:37:19.286103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:57.669 [2024-11-19 08:37:19.315354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.669 [2024-11-19 08:37:19.315461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.238 08:37:19 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.238 08:37:19 ublk -- common/autotest_common.sh@868 -- # return 0 00:15:58.238 08:37:19 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:58.238 08:37:19 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:58.238 08:37:19 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.238 08:37:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:58.238 ************************************ 00:15:58.238 START TEST test_create_ublk 00:15:58.238 ************************************ 00:15:58.238 08:37:19 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:15:58.238 08:37:19 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:58.238 08:37:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.238 08:37:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:58.238 [2024-11-19 08:37:19.998747] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:58.238 [2024-11-19 08:37:20.000143] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:58.238 08:37:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.238 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:58.238 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:58.238 08:37:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.238 08:37:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:58.238 08:37:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.238 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:58.238 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:58.239 08:37:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.239 08:37:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:58.239 [2024-11-19 08:37:20.082877] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:58.239 [2024-11-19 08:37:20.083282] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:58.239 [2024-11-19 08:37:20.083301] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:58.239 [2024-11-19 08:37:20.083310] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:58.239 [2024-11-19 08:37:20.089764] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:58.239 [2024-11-19 08:37:20.089810] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:58.239 
[2024-11-19 08:37:20.097761] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:58.239 [2024-11-19 08:37:20.098351] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:58.239 [2024-11-19 08:37:20.122754] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:58.239 08:37:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.239 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:58.239 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:58.239 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:58.239 08:37:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.239 08:37:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:58.498 08:37:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.498 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:58.498 { 00:15:58.498 "ublk_device": "/dev/ublkb0", 00:15:58.498 "id": 0, 00:15:58.498 "queue_depth": 512, 00:15:58.498 "num_queues": 4, 00:15:58.498 "bdev_name": "Malloc0" 00:15:58.498 } 00:15:58.498 ]' 00:15:58.498 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:58.498 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:58.498 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:58.498 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:58.498 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:58.498 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:58.498 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:58.498 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:58.498 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:58.758 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:58.758 08:37:20 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:58.758 08:37:20 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:58.758 08:37:20 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:58.758 08:37:20 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:58.758 08:37:20 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:58.758 08:37:20 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:58.758 08:37:20 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:58.758 08:37:20 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:58.758 08:37:20 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:58.758 08:37:20 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:58.758 08:37:20 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:15:58.758 08:37:20 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:58.758 fio: verification read phase will never start because write phase uses all of runtime 00:15:58.758 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:58.758 fio-3.35 00:15:58.758 Starting 1 process 00:16:10.990 00:16:10.990 fio_test: (groupid=0, jobs=1): err= 0: pid=83632: Tue Nov 19 08:37:30 2024 00:16:10.990 write: IOPS=15.1k, BW=58.9MiB/s (61.8MB/s)(589MiB/10001msec); 0 zone resets 00:16:10.990 clat (usec): min=38, max=4082, avg=65.47, stdev=106.29 00:16:10.990 lat (usec): min=38, max=4112, avg=65.91, stdev=106.30 00:16:10.990 clat percentiles (usec): 00:16:10.990 | 1.00th=[ 43], 5.00th=[ 46], 10.00th=[ 55], 20.00th=[ 57], 00:16:10.990 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 62], 00:16:10.990 | 70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 71], 95.00th=[ 76], 00:16:10.990 | 99.00th=[ 86], 99.50th=[ 93], 99.90th=[ 2212], 99.95th=[ 2900], 00:16:10.990 | 99.99th=[ 3589] 00:16:10.990 bw ( KiB/s): min=55640, max=68776, per=100.00%, avg=60362.89, stdev=3612.74, samples=19 00:16:10.990 iops : min=13910, max=17194, avg=15090.68, stdev=903.19, samples=19 00:16:10.990 lat (usec) : 50=6.85%, 100=92.78%, 250=0.13%, 500=0.01%, 750=0.01% 00:16:10.990 lat (usec) : 1000=0.02% 00:16:10.990 lat (msec) : 2=0.09%, 4=0.11%, 10=0.01% 00:16:10.990 cpu : usr=1.62%, sys=8.35%, ctx=150848, majf=0, minf=795 00:16:10.990 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.990 issued rwts: total=0,150841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.990 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.990 00:16:10.990 Run status group 0 (all jobs): 00:16:10.990 WRITE: bw=58.9MiB/s (61.8MB/s), 58.9MiB/s-58.9MiB/s (61.8MB/s-61.8MB/s), io=589MiB (618MB), run=10001-10001msec 00:16:10.990 00:16:10.990 Disk stats (read/write): 00:16:10.990 ublkb0: ios=0/149178, merge=0/0, ticks=0/8808, in_queue=8808, util=99.12% 00:16:10.990 08:37:30 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:10.990 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.990 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.990 [2024-11-19 08:37:30.686369] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:10.990 [2024-11-19 08:37:30.728776] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:10.990 [2024-11-19 08:37:30.729473] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:10.990 [2024-11-19 08:37:30.737860] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:10.990 [2024-11-19 08:37:30.738192] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:10.990 [2024-11-19 08:37:30.738218] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:10.990 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.990 08:37:30 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:16:10.990 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:10.990 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 [2024-11-19 08:37:30.753858] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:10.991 request: 00:16:10.991 { 00:16:10.991 "ublk_id": 0, 00:16:10.991 "method": "ublk_stop_disk", 00:16:10.991 "req_id": 1 00:16:10.991 } 00:16:10.991 Got JSON-RPC error response 00:16:10.991 response: 00:16:10.991 { 00:16:10.991 "code": -19, 00:16:10.991 "message": "No such device" 00:16:10.991 } 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:10.991 08:37:30 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 [2024-11-19 08:37:30.769827] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:10.991 [2024-11-19 08:37:30.771603] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:10.991 [2024-11-19 08:37:30.771645] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.991 08:37:30 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.991 08:37:30 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:10.991 08:37:30 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.991 08:37:30 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:10.991 08:37:30 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:10.991 08:37:30 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:10.991 08:37:30 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.991 08:37:30 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:10.991 08:37:30 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:10.991 08:37:30 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:10.991 00:16:10.991 real 0m11.006s 00:16:10.991 user 0m0.595s 00:16:10.991 sys 0m0.961s 00:16:10.991 ************************************ 00:16:10.991 END TEST test_create_ublk 00:16:10.991 ************************************ 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.991 08:37:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 08:37:31 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:10.991 08:37:31 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:10.991 08:37:31 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.991 08:37:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 ************************************ 00:16:10.991 START TEST test_create_multi_ublk 00:16:10.991 ************************************ 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 [2024-11-19 08:37:31.064754] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:10.991 [2024-11-19 08:37:31.065937] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 [2024-11-19 08:37:31.180896] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:10.991 [2024-11-19 08:37:31.181265] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:10.991 [2024-11-19 08:37:31.181283] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:10.991 [2024-11-19 08:37:31.181289] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.991 [2024-11-19 08:37:31.192758] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.991 [2024-11-19 08:37:31.192778] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.991 [2024-11-19 08:37:31.204766] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.991 [2024-11-19 08:37:31.205283] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:10.991 [2024-11-19 08:37:31.231756] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 [2024-11-19 08:37:31.352878] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:10.991 [2024-11-19 08:37:31.353249] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:10.991 [2024-11-19 08:37:31.353266] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:10.991 [2024-11-19 08:37:31.353274] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.991 [2024-11-19 08:37:31.364764] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.991 [2024-11-19 08:37:31.364789] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.991 [2024-11-19 08:37:31.376751] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.991 [2024-11-19 08:37:31.377255] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:10.991 [2024-11-19 08:37:31.401762] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.991 
08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.991 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 [2024-11-19 08:37:31.520877] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:10.991 [2024-11-19 08:37:31.521239] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:10.991 [2024-11-19 08:37:31.521257] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:10.991 [2024-11-19 08:37:31.521263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.991 [2024-11-19 08:37:31.532766] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.991 [2024-11-19 08:37:31.532789] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.991 [2024-11-19 08:37:31.544757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.991 [2024-11-19 08:37:31.545289] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:10.991 [2024-11-19 08:37:31.548225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.992 [2024-11-19 08:37:31.656873] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:10.992 [2024-11-19 08:37:31.657262] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:10.992 [2024-11-19 08:37:31.657279] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:10.992 [2024-11-19 08:37:31.657288] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.992 
[2024-11-19 08:37:31.675763] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.992 [2024-11-19 08:37:31.675791] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.992 [2024-11-19 08:37:31.690742] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.992 [2024-11-19 08:37:31.691241] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:10.992 [2024-11-19 08:37:31.714766] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:10.992 { 00:16:10.992 "ublk_device": "/dev/ublkb0", 00:16:10.992 "id": 0, 00:16:10.992 "queue_depth": 512, 00:16:10.992 "num_queues": 4, 00:16:10.992 "bdev_name": "Malloc0" 00:16:10.992 }, 00:16:10.992 { 00:16:10.992 "ublk_device": "/dev/ublkb1", 00:16:10.992 "id": 1, 00:16:10.992 "queue_depth": 512, 00:16:10.992 "num_queues": 4, 00:16:10.992 "bdev_name": "Malloc1" 00:16:10.992 }, 00:16:10.992 { 00:16:10.992 "ublk_device": "/dev/ublkb2", 00:16:10.992 "id": 2, 00:16:10.992 "queue_depth": 512, 00:16:10.992 "num_queues": 4, 00:16:10.992 "bdev_name": "Malloc2" 00:16:10.992 }, 00:16:10.992 { 00:16:10.992 "ublk_device": "/dev/ublkb3", 00:16:10.992 "id": 3, 00:16:10.992 "queue_depth": 512, 00:16:10.992 "num_queues": 4, 00:16:10.992 "bdev_name": "Malloc3" 00:16:10.992 } 00:16:10.992 ]' 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.992 08:37:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.992 [2024-11-19 08:37:32.587862] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:10.992 [2024-11-19 08:37:32.619796] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:10.992 [2024-11-19 08:37:32.620606] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:10.992 [2024-11-19 08:37:32.627755] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:10.992 [2024-11-19 08:37:32.628035] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:10.992 [2024-11-19 08:37:32.628050] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.992 [2024-11-19 08:37:32.642839] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:10.992 [2024-11-19 08:37:32.676173] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:10.992 [2024-11-19 08:37:32.677145] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:10.992 [2024-11-19 08:37:32.682752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:10.992 [2024-11-19 08:37:32.683031] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:10.992 [2024-11-19 08:37:32.683046] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.992 [2024-11-19 08:37:32.705844] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:10.992 [2024-11-19 08:37:32.739173] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:10.992 [2024-11-19 08:37:32.740094] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:10.992 [2024-11-19 08:37:32.745750] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:10.992 [2024-11-19 08:37:32.746027] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:10.992 [2024-11-19 08:37:32.746043] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:10.992 08:37:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.993 08:37:32 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.993 [2024-11-19 08:37:32.760853] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:10.993 [2024-11-19 08:37:32.796786] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:10.993 [2024-11-19 08:37:32.797514] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:10.993 [2024-11-19 08:37:32.807738] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:10.993 [2024-11-19 08:37:32.808034] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:10.993 [2024-11-19 08:37:32.808049] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:10.993 08:37:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.993 08:37:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:11.253 [2024-11-19 08:37:32.989859] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:11.253 [2024-11-19 08:37:32.991151] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:11.253 [2024-11-19 08:37:32.991195] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:11.253 08:37:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:11.253 08:37:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.253 08:37:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:11.253 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.253 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.253 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.253 08:37:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.253 08:37:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:11.253 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.254 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:11.514 08:37:33 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:11.514 08:37:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:11.774 08:37:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:11.774 08:37:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:11.774 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.774 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.774 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.774 08:37:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:11.774 08:37:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:11.774 08:37:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:11.774 00:16:11.774 real 0m2.465s 00:16:11.774 user 0m1.002s 00:16:11.774 sys 0m0.213s 00:16:11.774 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.774 08:37:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.774 ************************************ 00:16:11.774 END TEST test_create_multi_ublk 00:16:11.774 ************************************ 00:16:11.774 08:37:33 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:11.774 08:37:33 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:11.774 08:37:33 ublk -- ublk/ublk.sh@130 -- # killprocess 83593 00:16:11.774 08:37:33 ublk -- common/autotest_common.sh@954 -- # '[' -z 83593 ']' 00:16:11.774 08:37:33 ublk -- common/autotest_common.sh@958 -- # kill -0 83593 00:16:11.774 08:37:33 ublk -- common/autotest_common.sh@959 -- # uname 00:16:11.774 08:37:33 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.774 08:37:33 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83593 00:16:11.774 08:37:33 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:11.774 08:37:33 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.774 killing process with pid 83593 00:16:11.774 08:37:33 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83593' 00:16:11.774 08:37:33 ublk -- common/autotest_common.sh@973 -- # kill 83593 00:16:11.774 08:37:33 ublk -- common/autotest_common.sh@978 -- # wait 83593 00:16:12.034 [2024-11-19 08:37:33.843935] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:12.034 [2024-11-19 08:37:33.844035] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:12.294 00:16:12.294 real 0m19.153s 00:16:12.294 user 0m28.752s 00:16:12.294 sys 0m8.894s 00:16:12.294 08:37:34 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.294 08:37:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:12.294 ************************************ 00:16:12.294 END TEST ublk 00:16:12.294 ************************************ 00:16:12.294 08:37:34 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:12.294 
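
Note: the trace above is the multi-ublk teardown: each exported disk is stopped over its control device (UBLK_CMD_STOP_DEV followed by UBLK_CMD_DEL_DEV), the ublk target is destroyed with a long RPC timeout, the backing malloc bdevs are deleted, and a final bdev_get_bdevs / bdev_lvol_get_lvstores pass confirms nothing leaked before the target process is killed. Condensed into direct rpc.py calls (a sketch only; the suite itself goes through its rpc_cmd helper, and rpc.py here stands for scripts/rpc.py in this checkout):

    # stop and delete each exported ublk block device (ublk0..ublk3 in this run)
    for i in 0 1 2 3; do
        rpc.py ublk_stop_disk "$i"
    done
    rpc.py -t 120 ublk_destroy_target        # tear down the ublk target; allow extra shutdown time
    for b in Malloc0 Malloc1 Malloc2 Malloc3; do
        rpc.py bdev_malloc_delete "$b"       # drop the backing bdevs
    done
    rpc.py bdev_get_bdevs | jq length        # leftover check: expects 0
    rpc.py bdev_lvol_get_lvstores | jq length
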
08:37:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:12.294 08:37:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.294 08:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:12.294 ************************************ 00:16:12.294 START TEST ublk_recovery 00:16:12.294 ************************************ 00:16:12.294 08:37:34 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:12.553 * Looking for test storage... 00:16:12.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:12.553 08:37:34 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:12.553 08:37:34 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:12.554 08:37:34 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:12.554 08:37:34 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.554 08:37:34 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:12.554 08:37:34 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.554 08:37:34 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:12.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.554 --rc genhtml_branch_coverage=1 00:16:12.554 --rc genhtml_function_coverage=1 00:16:12.554 --rc genhtml_legend=1 00:16:12.554 --rc geninfo_all_blocks=1 00:16:12.554 --rc geninfo_unexecuted_blocks=1 00:16:12.554 00:16:12.554 ' 00:16:12.554 08:37:34 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:12.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.554 --rc genhtml_branch_coverage=1 00:16:12.554 --rc genhtml_function_coverage=1 00:16:12.554 --rc genhtml_legend=1 00:16:12.554 --rc geninfo_all_blocks=1 00:16:12.554 --rc geninfo_unexecuted_blocks=1 00:16:12.554 00:16:12.554 ' 00:16:12.554 08:37:34 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:12.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.554 --rc genhtml_branch_coverage=1 00:16:12.554 --rc genhtml_function_coverage=1 00:16:12.554 --rc genhtml_legend=1 00:16:12.554 --rc geninfo_all_blocks=1 00:16:12.554 --rc geninfo_unexecuted_blocks=1 00:16:12.554 00:16:12.554 ' 00:16:12.554 08:37:34 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:12.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.554 --rc genhtml_branch_coverage=1 00:16:12.554 --rc genhtml_function_coverage=1 00:16:12.554 --rc genhtml_legend=1 00:16:12.554 --rc geninfo_all_blocks=1 00:16:12.554 --rc geninfo_unexecuted_blocks=1 00:16:12.554 00:16:12.554 ' 00:16:12.554 08:37:34 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:12.554 08:37:34 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:12.554 08:37:34 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:12.554 08:37:34 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:12.554 08:37:34 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:12.554 08:37:34 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:12.554 08:37:34 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:12.554 08:37:34 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:12.554 08:37:34 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:12.554 08:37:34 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:12.554 08:37:34 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=83966 00:16:12.554 08:37:34 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:12.554 08:37:34 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:12.554 08:37:34 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 83966 00:16:12.554 08:37:34 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 83966 ']' 00:16:12.554 08:37:34 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.554 08:37:34 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.554 08:37:34 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.554 08:37:34 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.554 08:37:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:12.814 [2024-11-19 08:37:34.473757] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:16:12.814 [2024-11-19 08:37:34.473914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83966 ] 00:16:12.814 [2024-11-19 08:37:34.627627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:12.814 [2024-11-19 08:37:34.653623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.814 [2024-11-19 08:37:34.653768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.410 08:37:35 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.410 08:37:35 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:13.410 08:37:35 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:13.410 08:37:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.410 08:37:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.410 [2024-11-19 08:37:35.273737] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:13.410 [2024-11-19 08:37:35.275106] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:13.410 08:37:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.410 08:37:35 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:13.410 08:37:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.410 08:37:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.410 malloc0 00:16:13.410 08:37:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.410 08:37:35 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:13.410 08:37:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.410 08:37:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.410 [2024-11-19 08:37:35.313872] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:13.410 [2024-11-19 08:37:35.313977] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:13.410 [2024-11-19 08:37:35.313987] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:13.410 [2024-11-19 08:37:35.314005] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:13.669 [2024-11-19 08:37:35.321896] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:13.669 [2024-11-19 08:37:35.321939] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:13.669 [2024-11-19 08:37:35.329743] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:13.669 [2024-11-19 08:37:35.329877] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:13.669 [2024-11-19 08:37:35.352735] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:13.669 1 00:16:13.670 08:37:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.670 08:37:35 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:14.609 08:37:36 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=84003 00:16:14.609 08:37:36 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:14.609 08:37:36 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:14.609 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:14.609 fio-3.35 00:16:14.609 Starting 1 process 00:16:19.886 08:37:41 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 83966 00:16:19.886 08:37:41 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:25.191 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 83966 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:25.191 08:37:46 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=84108 00:16:25.191 08:37:46 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:25.191 08:37:46 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.191 08:37:46 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 84108 00:16:25.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.191 08:37:46 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 84108 ']' 00:16:25.191 08:37:46 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.191 08:37:46 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.191 08:37:46 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.191 08:37:46 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.191 08:37:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:25.191 [2024-11-19 08:37:46.468361] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
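
Note: the segment above is the crash half of the ublk recovery test: spdk_tgt is started with -m 0x3 -L ublk, a ublk target is created, a 64 MiB malloc bdev is exported as ublk device 1 with two queues of depth 128, fio is pinned to cores 2-3 for a 60-second randrw run against /dev/ublkb1, and the target (pid 83966) is killed with SIGKILL while that I/O is in flight; a second spdk_tgt is then brought up to recover the disk. As a rough shell sketch (rpc.py standing in for the rpc_cmd helper that ublk_recovery.sh actually uses):

    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096        # 64 MiB backing bdev, 4 KiB blocks
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128        # exports /dev/ublkb1
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    sleep 5
    kill -9 "$spdk_pid"                                 # crash the target mid-I/O
    # the restarted target then re-attaches the same device further down in the trace:
    #     rpc.py ublk_create_target
    #     rpc.py ublk_recover_disk malloc0 1
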
00:16:25.191 [2024-11-19 08:37:46.468497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84108 ] 00:16:25.191 [2024-11-19 08:37:46.625218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:25.191 [2024-11-19 08:37:46.651486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.191 [2024-11-19 08:37:46.651581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.451 08:37:47 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.451 08:37:47 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:25.451 08:37:47 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:25.451 08:37:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.451 08:37:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:25.451 [2024-11-19 08:37:47.281735] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:25.451 [2024-11-19 08:37:47.283101] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:25.451 08:37:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.451 08:37:47 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:25.451 08:37:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.451 08:37:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:25.451 malloc0 00:16:25.451 08:37:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.451 08:37:47 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:25.451 08:37:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.451 08:37:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:25.451 [2024-11-19 08:37:47.320922] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:25.451 [2024-11-19 08:37:47.320968] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:25.451 [2024-11-19 08:37:47.320975] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:25.451 [2024-11-19 08:37:47.328769] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:25.451 [2024-11-19 08:37:47.328793] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:25.451 [2024-11-19 08:37:47.328804] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:25.451 [2024-11-19 08:37:47.328880] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:25.451 1 00:16:25.451 08:37:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.451 08:37:47 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 84003 00:16:25.451 [2024-11-19 08:37:47.336775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:25.451 [2024-11-19 08:37:47.343212] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:25.451 [2024-11-19 08:37:47.350942] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:25.451 [2024-11-19 
08:37:47.350966] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:21.756 00:17:21.756 fio_test: (groupid=0, jobs=1): err= 0: pid=84006: Tue Nov 19 08:38:36 2024 00:17:21.756 read: IOPS=21.7k, BW=84.8MiB/s (89.0MB/s)(5090MiB/60002msec) 00:17:21.756 slat (nsec): min=1351, max=257191, avg=7748.23, stdev=2640.69 00:17:21.756 clat (usec): min=777, max=5992.5k, avg=2898.39, stdev=42298.30 00:17:21.756 lat (usec): min=783, max=5992.5k, avg=2906.14, stdev=42298.30 00:17:21.756 clat percentiles (usec): 00:17:21.756 | 1.00th=[ 2089], 5.00th=[ 2245], 10.00th=[ 2278], 20.00th=[ 2343], 00:17:21.756 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:17:21.756 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 3163], 95.00th=[ 3982], 00:17:21.756 | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 6980], 99.95th=[ 8029], 00:17:21.756 | 99.99th=[ 9241] 00:17:21.756 bw ( KiB/s): min=25592, max=100208, per=100.00%, avg=95780.36, stdev=9515.19, samples=108 00:17:21.756 iops : min= 6398, max=25052, avg=23945.08, stdev=2378.80, samples=108 00:17:21.756 write: IOPS=21.7k, BW=84.7MiB/s (88.8MB/s)(5084MiB/60002msec); 0 zone resets 00:17:21.756 slat (nsec): min=1522, max=4116.5k, avg=7999.17, stdev=4952.09 00:17:21.756 clat (usec): min=740, max=5992.6k, avg=2981.08, stdev=41669.19 00:17:21.756 lat (usec): min=754, max=5992.6k, avg=2989.08, stdev=41669.18 00:17:21.756 clat percentiles (usec): 00:17:21.756 | 1.00th=[ 2073], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2442], 00:17:21.756 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:17:21.756 | 70.00th=[ 2638], 80.00th=[ 2671], 90.00th=[ 3130], 95.00th=[ 3982], 00:17:21.756 | 99.00th=[ 5080], 99.50th=[ 5669], 99.90th=[ 7177], 99.95th=[ 8160], 00:17:21.756 | 99.99th=[ 9372] 00:17:21.756 bw ( KiB/s): min=25600, max=100336, per=100.00%, avg=95675.31, stdev=9382.45, samples=108 00:17:21.756 iops : min= 6400, max=25084, avg=23918.78, stdev=2345.62, samples=108 00:17:21.756 lat (usec) : 750=0.01%, 1000=0.01% 00:17:21.756 lat (msec) : 2=0.46%, 4=94.65%, 10=4.89%, 20=0.01%, >=2000=0.01% 00:17:21.756 cpu : usr=9.54%, sys=34.54%, ctx=101946, majf=0, minf=13 00:17:21.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:21.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:21.756 issued rwts: total=1303078,1301497,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:21.756 00:17:21.756 Run status group 0 (all jobs): 00:17:21.756 READ: bw=84.8MiB/s (89.0MB/s), 84.8MiB/s-84.8MiB/s (89.0MB/s-89.0MB/s), io=5090MiB (5337MB), run=60002-60002msec 00:17:21.756 WRITE: bw=84.7MiB/s (88.8MB/s), 84.7MiB/s-84.7MiB/s (88.8MB/s-88.8MB/s), io=5084MiB (5331MB), run=60002-60002msec 00:17:21.756 00:17:21.756 Disk stats (read/write): 00:17:21.756 ublkb1: ios=1300698/1299167, merge=0/0, ticks=3671512/3637129, in_queue=7308642, util=99.99% 00:17:21.756 08:38:36 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.756 [2024-11-19 08:38:36.642713] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:21.756 [2024-11-19 08:38:36.680843] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:17:21.756 [2024-11-19 08:38:36.681079] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:21.756 [2024-11-19 08:38:36.690757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:21.756 [2024-11-19 08:38:36.690917] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:21.756 [2024-11-19 08:38:36.690929] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.756 08:38:36 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.756 [2024-11-19 08:38:36.708834] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:21.756 [2024-11-19 08:38:36.710662] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:21.756 [2024-11-19 08:38:36.710707] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.756 08:38:36 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:21.756 08:38:36 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:21.756 08:38:36 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 84108 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 84108 ']' 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 84108 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84108 00:17:21.756 killing process with pid 84108 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84108' 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@973 -- # kill 84108 00:17:21.756 08:38:36 ublk_recovery -- common/autotest_common.sh@978 -- # wait 84108 00:17:21.756 [2024-11-19 08:38:37.031134] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:21.756 [2024-11-19 08:38:37.031224] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:21.756 ************************************ 00:17:21.756 END TEST ublk_recovery 00:17:21.756 ************************************ 00:17:21.756 00:17:21.756 real 1m3.329s 00:17:21.756 user 1m40.234s 00:17:21.756 sys 0m42.240s 00:17:21.756 08:38:37 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.756 08:38:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.756 08:38:37 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:21.756 08:38:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:21.756 08:38:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:21.756 08:38:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:21.756 08:38:37 -- common/autotest_common.sh@10 -- # set +x 00:17:21.756 08:38:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:21.756 08:38:37 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:21.756 08:38:37 -- 
spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:21.756 08:38:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:21.756 08:38:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:21.756 08:38:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:21.756 08:38:37 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:21.756 08:38:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:21.756 08:38:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:21.756 08:38:37 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:21.756 08:38:37 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:21.756 08:38:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:21.756 08:38:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.756 08:38:37 -- common/autotest_common.sh@10 -- # set +x 00:17:21.756 ************************************ 00:17:21.756 START TEST ftl 00:17:21.756 ************************************ 00:17:21.756 08:38:37 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:21.756 * Looking for test storage... 00:17:21.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.756 08:38:37 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:21.756 08:38:37 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:21.756 08:38:37 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:21.756 08:38:37 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:21.756 08:38:37 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.757 08:38:37 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.757 08:38:37 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.757 08:38:37 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.757 08:38:37 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.757 08:38:37 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.757 08:38:37 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.757 08:38:37 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.757 08:38:37 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.757 08:38:37 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.757 08:38:37 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.757 08:38:37 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:21.757 08:38:37 ftl -- scripts/common.sh@345 -- # : 1 00:17:21.757 08:38:37 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.757 08:38:37 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.757 08:38:37 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:21.757 08:38:37 ftl -- scripts/common.sh@353 -- # local d=1 00:17:21.757 08:38:37 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.757 08:38:37 ftl -- scripts/common.sh@355 -- # echo 1 00:17:21.757 08:38:37 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.757 08:38:37 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:21.757 08:38:37 ftl -- scripts/common.sh@353 -- # local d=2 00:17:21.757 08:38:37 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.757 08:38:37 ftl -- scripts/common.sh@355 -- # echo 2 00:17:21.757 08:38:37 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.757 08:38:37 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.757 08:38:37 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.757 08:38:37 ftl -- scripts/common.sh@368 -- # return 0 00:17:21.757 08:38:37 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.757 08:38:37 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:21.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.757 --rc genhtml_branch_coverage=1 00:17:21.757 --rc genhtml_function_coverage=1 00:17:21.757 --rc genhtml_legend=1 00:17:21.757 --rc geninfo_all_blocks=1 00:17:21.757 --rc geninfo_unexecuted_blocks=1 00:17:21.757 00:17:21.757 ' 00:17:21.757 08:38:37 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:21.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.757 --rc genhtml_branch_coverage=1 00:17:21.757 --rc genhtml_function_coverage=1 00:17:21.757 --rc genhtml_legend=1 00:17:21.757 --rc geninfo_all_blocks=1 00:17:21.757 --rc geninfo_unexecuted_blocks=1 00:17:21.757 00:17:21.757 ' 00:17:21.757 08:38:37 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:21.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.757 --rc genhtml_branch_coverage=1 00:17:21.757 --rc genhtml_function_coverage=1 00:17:21.757 --rc genhtml_legend=1 00:17:21.757 --rc geninfo_all_blocks=1 00:17:21.757 --rc geninfo_unexecuted_blocks=1 00:17:21.757 00:17:21.757 ' 00:17:21.757 08:38:37 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:21.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.757 --rc genhtml_branch_coverage=1 00:17:21.757 --rc genhtml_function_coverage=1 00:17:21.757 --rc genhtml_legend=1 00:17:21.757 --rc geninfo_all_blocks=1 00:17:21.757 --rc geninfo_unexecuted_blocks=1 00:17:21.757 00:17:21.757 ' 00:17:21.757 08:38:37 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:21.757 08:38:37 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:21.757 08:38:37 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.757 08:38:37 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.757 08:38:37 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
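
Note: the scripts/common.sh xtrace above (it reappears at the start of every suite) is the lcov version gate: `lt 1.15 2` splits both version strings on `.`, `-` and `:` and compares them field by field; because 1.15 sorts below 2 the comparison succeeds, and the --rc lcov_branch_coverage / lcov_function_coverage flags are exported through LCOV_OPTS and LCOV. A bash sketch of that comparison (the real helper additionally runs each field through its decimal() wrapper, which the trace shows as "decimal 1" / "decimal 2"):

    lhs=1.15 rhs=2 result=eq
    IFS=.-: read -ra ver1 <<< "$lhs"
    IFS=.-: read -ra ver2 <<< "$rhs"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}               # missing fields treated as 0 in this sketch
        (( d1 > d2 )) && { result=gt; break; }
        (( d1 < d2 )) && { result=lt; break; }
    done
    echo "$lhs $result $rhs"                            # prints "1.15 lt 2"
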
00:17:21.757 08:38:37 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:21.757 08:38:37 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:21.757 08:38:37 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:21.757 08:38:37 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:21.757 08:38:37 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.757 08:38:37 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.757 08:38:37 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:21.757 08:38:37 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:21.757 08:38:37 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:21.757 08:38:37 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:21.757 08:38:37 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:21.757 08:38:37 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:21.757 08:38:37 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.757 08:38:37 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.757 08:38:37 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:21.757 08:38:37 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:21.757 08:38:37 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:21.757 08:38:37 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:21.757 08:38:37 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:21.757 08:38:37 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:21.757 08:38:37 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:21.757 08:38:37 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:21.757 08:38:37 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:21.757 08:38:37 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:21.757 08:38:37 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:21.757 08:38:37 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:21.757 08:38:37 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:21.757 08:38:37 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:21.757 08:38:37 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:21.757 08:38:37 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:21.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:21.757 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:21.757 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:21.757 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:21.757 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:21.757 08:38:38 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=84905 00:17:21.757 08:38:38 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:21.757 08:38:38 ftl -- ftl/ftl.sh@38 -- # waitforlisten 84905 00:17:21.757 08:38:38 ftl -- common/autotest_common.sh@835 -- # '[' -z 84905 ']' 00:17:21.757 08:38:38 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.757 08:38:38 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.757 08:38:38 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.757 08:38:38 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.757 08:38:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:21.757 [2024-11-19 08:38:38.743803] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:17:21.757 [2024-11-19 08:38:38.744026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84905 ] 00:17:21.757 [2024-11-19 08:38:38.900264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.757 [2024-11-19 08:38:38.925950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.757 08:38:39 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.757 08:38:39 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:21.757 08:38:39 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:21.757 08:38:39 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:21.757 08:38:40 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:21.757 08:38:40 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:21.757 08:38:40 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:21.757 08:38:40 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:21.757 08:38:40 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:21.757 08:38:40 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:21.757 08:38:40 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:21.757 08:38:40 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:21.757 08:38:40 ftl -- ftl/ftl.sh@50 -- # break 00:17:21.757 08:38:40 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:21.757 08:38:40 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:21.757 08:38:40 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:21.757 08:38:40 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:21.757 08:38:41 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:21.757 08:38:41 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:21.757 08:38:41 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:21.757 08:38:41 ftl -- ftl/ftl.sh@63 -- # break 00:17:21.757 08:38:41 ftl -- ftl/ftl.sh@66 -- # killprocess 84905 00:17:21.757 08:38:41 ftl -- common/autotest_common.sh@954 -- # '[' -z 84905 ']' 00:17:21.757 08:38:41 ftl -- common/autotest_common.sh@958 -- # kill -0 84905 00:17:21.757 08:38:41 ftl -- common/autotest_common.sh@959 -- # uname 00:17:21.757 08:38:41 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.757 08:38:41 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84905 00:17:21.757 killing process with pid 84905 00:17:21.757 08:38:41 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.758 08:38:41 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.758 08:38:41 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84905' 00:17:21.758 08:38:41 ftl -- common/autotest_common.sh@973 -- # kill 84905 00:17:21.758 08:38:41 ftl -- common/autotest_common.sh@978 -- # wait 84905 00:17:21.758 08:38:41 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:21.758 08:38:41 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:21.758 08:38:41 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:21.758 08:38:41 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.758 08:38:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:21.758 ************************************ 00:17:21.758 START TEST ftl_fio_basic 00:17:21.758 ************************************ 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:21.758 * Looking for test storage... 00:17:21.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:21.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.758 --rc genhtml_branch_coverage=1 00:17:21.758 --rc genhtml_function_coverage=1 00:17:21.758 --rc genhtml_legend=1 00:17:21.758 --rc geninfo_all_blocks=1 00:17:21.758 --rc geninfo_unexecuted_blocks=1 00:17:21.758 00:17:21.758 ' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:21.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.758 --rc genhtml_branch_coverage=1 00:17:21.758 --rc genhtml_function_coverage=1 00:17:21.758 --rc genhtml_legend=1 00:17:21.758 --rc geninfo_all_blocks=1 00:17:21.758 --rc geninfo_unexecuted_blocks=1 00:17:21.758 00:17:21.758 ' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:21.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.758 --rc genhtml_branch_coverage=1 00:17:21.758 --rc genhtml_function_coverage=1 00:17:21.758 --rc genhtml_legend=1 00:17:21.758 --rc geninfo_all_blocks=1 00:17:21.758 --rc geninfo_unexecuted_blocks=1 00:17:21.758 00:17:21.758 ' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:21.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.758 --rc genhtml_branch_coverage=1 00:17:21.758 --rc genhtml_function_coverage=1 00:17:21.758 --rc genhtml_legend=1 00:17:21.758 --rc geninfo_all_blocks=1 00:17:21.758 --rc geninfo_unexecuted_blocks=1 00:17:21.758 00:17:21.758 ' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
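
Note: a few lines earlier, ftl.sh chose the devices this whole FTL run uses: the bdevs attached from gen_nvme.sh are introspected with bdev_get_bdevs and filtered with jq, a non-zoned bdev with 64-byte metadata and at least 1310720 blocks becomes the nv-cache device (0000:00:10.0 here), and any other non-zoned bdev of at least that size becomes the base device (0000:00:11.0); the probe target is then killed and fio.sh receives the two PCI addresses as arguments. Sketched as direct calls (the jq filters are copied from the trace; ftl.sh feeds gen_nvme.sh output to load_subsystem_config via process substitution, the /dev/fd/62 above):

    rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)
    # nv-cache candidate: 64 B metadata, not zoned, >= 1310720 blocks
    rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false
        and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
    # base candidate: anything else non-zoned of sufficient size
    rpc.py bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
        and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
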
00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=85026 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 85026 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 85026 ']' 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.758 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.759 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.759 08:38:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:21.759 [2024-11-19 08:38:41.788056] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
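
Note: fio.sh configures the basic suite just above: suite['basic'] expands to the randw-verify, randw-verify-j2 and randw-verify-depth128 jobs, 0000:00:11.0 is the base device and 0000:00:10.0 the nv-cache, each job gets a 240-second cap, FTL_BDEV_NAME and FTL_JSON_CONF are exported for the fio job files, and a dedicated spdk_tgt is launched on three cores. The environment it builds amounts to roughly the following (paths shortened to the repo root used in this run):

    tests='randw-verify randw-verify-j2 randw-verify-depth128'   # suite['basic']
    device=0000:00:11.0           # base namespace
    cache_device=0000:00:10.0     # nv-cache namespace
    timeout=240                   # per-fio-job cap, seconds
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=test/ftl/config/ftl.json
    build/bin/spdk_tgt -m 7 &     # reactors on cores 0-2 for the FTL target
    svcpid=$!
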
00:17:21.759 [2024-11-19 08:38:41.788263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85026 ] 00:17:21.759 [2024-11-19 08:38:41.948157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:21.759 [2024-11-19 08:38:41.974832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.759 [2024-11-19 08:38:41.974877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.759 [2024-11-19 08:38:41.974965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:21.759 08:38:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:21.759 { 00:17:21.759 "name": "nvme0n1", 00:17:21.759 "aliases": [ 00:17:21.759 "284ce7f1-c8e5-4f97-9cf1-41909a223e4f" 00:17:21.759 ], 00:17:21.759 "product_name": "NVMe disk", 00:17:21.759 "block_size": 4096, 00:17:21.759 "num_blocks": 1310720, 00:17:21.759 "uuid": "284ce7f1-c8e5-4f97-9cf1-41909a223e4f", 00:17:21.759 "numa_id": -1, 00:17:21.759 "assigned_rate_limits": { 00:17:21.759 "rw_ios_per_sec": 0, 00:17:21.759 "rw_mbytes_per_sec": 0, 00:17:21.759 "r_mbytes_per_sec": 0, 00:17:21.759 "w_mbytes_per_sec": 0 00:17:21.759 }, 00:17:21.759 "claimed": false, 00:17:21.759 "zoned": false, 00:17:21.759 "supported_io_types": { 00:17:21.759 "read": true, 00:17:21.759 "write": true, 00:17:21.759 "unmap": true, 00:17:21.759 "flush": true, 00:17:21.759 "reset": true, 00:17:21.759 "nvme_admin": true, 00:17:21.759 "nvme_io": true, 00:17:21.759 "nvme_io_md": false, 00:17:21.759 "write_zeroes": true, 00:17:21.759 "zcopy": false, 00:17:21.759 "get_zone_info": false, 00:17:21.759 "zone_management": false, 00:17:21.759 "zone_append": false, 00:17:21.759 "compare": true, 00:17:21.759 "compare_and_write": false, 00:17:21.759 "abort": true, 00:17:21.759 
"seek_hole": false, 00:17:21.759 "seek_data": false, 00:17:21.759 "copy": true, 00:17:21.759 "nvme_iov_md": false 00:17:21.759 }, 00:17:21.759 "driver_specific": { 00:17:21.759 "nvme": [ 00:17:21.759 { 00:17:21.759 "pci_address": "0000:00:11.0", 00:17:21.759 "trid": { 00:17:21.759 "trtype": "PCIe", 00:17:21.759 "traddr": "0000:00:11.0" 00:17:21.759 }, 00:17:21.759 "ctrlr_data": { 00:17:21.759 "cntlid": 0, 00:17:21.759 "vendor_id": "0x1b36", 00:17:21.759 "model_number": "QEMU NVMe Ctrl", 00:17:21.759 "serial_number": "12341", 00:17:21.759 "firmware_revision": "8.0.0", 00:17:21.759 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:21.759 "oacs": { 00:17:21.759 "security": 0, 00:17:21.759 "format": 1, 00:17:21.759 "firmware": 0, 00:17:21.759 "ns_manage": 1 00:17:21.759 }, 00:17:21.759 "multi_ctrlr": false, 00:17:21.759 "ana_reporting": false 00:17:21.759 }, 00:17:21.759 "vs": { 00:17:21.759 "nvme_version": "1.4" 00:17:21.759 }, 00:17:21.759 "ns_data": { 00:17:21.759 "id": 1, 00:17:21.759 "can_share": false 00:17:21.759 } 00:17:21.759 } 00:17:21.759 ], 00:17:21.759 "mp_policy": "active_passive" 00:17:21.759 } 00:17:21.759 } 00:17:21.759 ]' 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=0c9d82c9-5054-428b-9434-90def728ef8e 00:17:21.759 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0c9d82c9-5054-428b-9434-90def728ef8e 00:17:22.020 08:38:43 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=490c0ae4-6fdd-47bc-b033-75d1ab83b92a 00:17:22.020 08:38:43 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 490c0ae4-6fdd-47bc-b033-75d1ab83b92a 00:17:22.020 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:22.020 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:22.020 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=490c0ae4-6fdd-47bc-b033-75d1ab83b92a 00:17:22.020 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:22.020 08:38:43 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 490c0ae4-6fdd-47bc-b033-75d1ab83b92a 00:17:22.020 08:38:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=490c0ae4-6fdd-47bc-b033-75d1ab83b92a 
00:17:22.020 08:38:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:22.020 08:38:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:22.020 08:38:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:22.020 08:38:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 490c0ae4-6fdd-47bc-b033-75d1ab83b92a 00:17:22.280 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:22.280 { 00:17:22.280 "name": "490c0ae4-6fdd-47bc-b033-75d1ab83b92a", 00:17:22.280 "aliases": [ 00:17:22.280 "lvs/nvme0n1p0" 00:17:22.280 ], 00:17:22.280 "product_name": "Logical Volume", 00:17:22.280 "block_size": 4096, 00:17:22.280 "num_blocks": 26476544, 00:17:22.280 "uuid": "490c0ae4-6fdd-47bc-b033-75d1ab83b92a", 00:17:22.280 "assigned_rate_limits": { 00:17:22.280 "rw_ios_per_sec": 0, 00:17:22.280 "rw_mbytes_per_sec": 0, 00:17:22.280 "r_mbytes_per_sec": 0, 00:17:22.280 "w_mbytes_per_sec": 0 00:17:22.280 }, 00:17:22.280 "claimed": false, 00:17:22.280 "zoned": false, 00:17:22.280 "supported_io_types": { 00:17:22.280 "read": true, 00:17:22.280 "write": true, 00:17:22.280 "unmap": true, 00:17:22.280 "flush": false, 00:17:22.280 "reset": true, 00:17:22.280 "nvme_admin": false, 00:17:22.280 "nvme_io": false, 00:17:22.280 "nvme_io_md": false, 00:17:22.280 "write_zeroes": true, 00:17:22.280 "zcopy": false, 00:17:22.280 "get_zone_info": false, 00:17:22.280 "zone_management": false, 00:17:22.280 "zone_append": false, 00:17:22.280 "compare": false, 00:17:22.280 "compare_and_write": false, 00:17:22.280 "abort": false, 00:17:22.280 "seek_hole": true, 00:17:22.280 "seek_data": true, 00:17:22.280 "copy": false, 00:17:22.280 "nvme_iov_md": false 00:17:22.280 }, 00:17:22.280 "driver_specific": { 00:17:22.280 "lvol": { 00:17:22.280 "lvol_store_uuid": "0c9d82c9-5054-428b-9434-90def728ef8e", 00:17:22.280 "base_bdev": "nvme0n1", 00:17:22.280 "thin_provision": true, 00:17:22.280 "num_allocated_clusters": 0, 00:17:22.280 "snapshot": false, 00:17:22.280 "clone": false, 00:17:22.280 "esnap_clone": false 00:17:22.280 } 00:17:22.280 } 00:17:22.280 } 00:17:22.280 ]' 00:17:22.280 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:22.280 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:22.280 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:22.280 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:22.280 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:22.280 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:22.280 08:38:44 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:22.280 08:38:44 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:22.280 08:38:44 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:22.540 08:38:44 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:22.540 08:38:44 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:22.540 08:38:44 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 490c0ae4-6fdd-47bc-b033-75d1ab83b92a 00:17:22.540 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=490c0ae4-6fdd-47bc-b033-75d1ab83b92a 00:17:22.540 08:38:44 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:22.540 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:22.540 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:22.540 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 490c0ae4-6fdd-47bc-b033-75d1ab83b92a 00:17:22.800 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:22.800 { 00:17:22.800 "name": "490c0ae4-6fdd-47bc-b033-75d1ab83b92a", 00:17:22.800 "aliases": [ 00:17:22.800 "lvs/nvme0n1p0" 00:17:22.800 ], 00:17:22.800 "product_name": "Logical Volume", 00:17:22.800 "block_size": 4096, 00:17:22.800 "num_blocks": 26476544, 00:17:22.800 "uuid": "490c0ae4-6fdd-47bc-b033-75d1ab83b92a", 00:17:22.800 "assigned_rate_limits": { 00:17:22.800 "rw_ios_per_sec": 0, 00:17:22.800 "rw_mbytes_per_sec": 0, 00:17:22.800 "r_mbytes_per_sec": 0, 00:17:22.800 "w_mbytes_per_sec": 0 00:17:22.800 }, 00:17:22.800 "claimed": false, 00:17:22.800 "zoned": false, 00:17:22.800 "supported_io_types": { 00:17:22.800 "read": true, 00:17:22.800 "write": true, 00:17:22.800 "unmap": true, 00:17:22.800 "flush": false, 00:17:22.800 "reset": true, 00:17:22.800 "nvme_admin": false, 00:17:22.800 "nvme_io": false, 00:17:22.800 "nvme_io_md": false, 00:17:22.800 "write_zeroes": true, 00:17:22.800 "zcopy": false, 00:17:22.800 "get_zone_info": false, 00:17:22.800 "zone_management": false, 00:17:22.800 "zone_append": false, 00:17:22.800 "compare": false, 00:17:22.800 "compare_and_write": false, 00:17:22.800 "abort": false, 00:17:22.800 "seek_hole": true, 00:17:22.800 "seek_data": true, 00:17:22.800 "copy": false, 00:17:22.800 "nvme_iov_md": false 00:17:22.800 }, 00:17:22.800 "driver_specific": { 00:17:22.800 "lvol": { 00:17:22.800 "lvol_store_uuid": "0c9d82c9-5054-428b-9434-90def728ef8e", 00:17:22.800 "base_bdev": "nvme0n1", 00:17:22.800 "thin_provision": true, 00:17:22.800 "num_allocated_clusters": 0, 00:17:22.800 "snapshot": false, 00:17:22.800 "clone": false, 00:17:22.800 "esnap_clone": false 00:17:22.800 } 00:17:22.800 } 00:17:22.800 } 00:17:22.800 ]' 00:17:22.800 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:22.800 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:22.800 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:22.800 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:22.800 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:22.800 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:22.800 08:38:44 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:22.800 08:38:44 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:23.060 08:38:44 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:23.060 08:38:44 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:23.060 08:38:44 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:23.060 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:23.060 08:38:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 490c0ae4-6fdd-47bc-b033-75d1ab83b92a 00:17:23.060 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=490c0ae4-6fdd-47bc-b033-75d1ab83b92a 00:17:23.060 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:23.060 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:23.060 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:23.060 08:38:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 490c0ae4-6fdd-47bc-b033-75d1ab83b92a 00:17:23.319 08:38:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:23.319 { 00:17:23.319 "name": "490c0ae4-6fdd-47bc-b033-75d1ab83b92a", 00:17:23.320 "aliases": [ 00:17:23.320 "lvs/nvme0n1p0" 00:17:23.320 ], 00:17:23.320 "product_name": "Logical Volume", 00:17:23.320 "block_size": 4096, 00:17:23.320 "num_blocks": 26476544, 00:17:23.320 "uuid": "490c0ae4-6fdd-47bc-b033-75d1ab83b92a", 00:17:23.320 "assigned_rate_limits": { 00:17:23.320 "rw_ios_per_sec": 0, 00:17:23.320 "rw_mbytes_per_sec": 0, 00:17:23.320 "r_mbytes_per_sec": 0, 00:17:23.320 "w_mbytes_per_sec": 0 00:17:23.320 }, 00:17:23.320 "claimed": false, 00:17:23.320 "zoned": false, 00:17:23.320 "supported_io_types": { 00:17:23.320 "read": true, 00:17:23.320 "write": true, 00:17:23.320 "unmap": true, 00:17:23.320 "flush": false, 00:17:23.320 "reset": true, 00:17:23.320 "nvme_admin": false, 00:17:23.320 "nvme_io": false, 00:17:23.320 "nvme_io_md": false, 00:17:23.320 "write_zeroes": true, 00:17:23.320 "zcopy": false, 00:17:23.320 "get_zone_info": false, 00:17:23.320 "zone_management": false, 00:17:23.320 "zone_append": false, 00:17:23.320 "compare": false, 00:17:23.320 "compare_and_write": false, 00:17:23.320 "abort": false, 00:17:23.320 "seek_hole": true, 00:17:23.320 "seek_data": true, 00:17:23.320 "copy": false, 00:17:23.320 "nvme_iov_md": false 00:17:23.320 }, 00:17:23.320 "driver_specific": { 00:17:23.320 "lvol": { 00:17:23.320 "lvol_store_uuid": "0c9d82c9-5054-428b-9434-90def728ef8e", 00:17:23.320 "base_bdev": "nvme0n1", 00:17:23.320 "thin_provision": true, 00:17:23.320 "num_allocated_clusters": 0, 00:17:23.320 "snapshot": false, 00:17:23.320 "clone": false, 00:17:23.320 "esnap_clone": false 00:17:23.320 } 00:17:23.320 } 00:17:23.320 } 00:17:23.320 ]' 00:17:23.320 08:38:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:23.320 08:38:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:23.320 08:38:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:23.320 08:38:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:23.320 08:38:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:23.320 08:38:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:23.320 08:38:45 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:23.320 08:38:45 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:23.320 08:38:45 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 490c0ae4-6fdd-47bc-b033-75d1ab83b92a -c nvc0n1p0 --l2p_dram_limit 60 00:17:23.580 [2024-11-19 08:38:45.322841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.580 [2024-11-19 08:38:45.322893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:23.580 [2024-11-19 08:38:45.322906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:23.580 
[2024-11-19 08:38:45.322932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.580 [2024-11-19 08:38:45.323035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.580 [2024-11-19 08:38:45.323049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.580 [2024-11-19 08:38:45.323057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:23.580 [2024-11-19 08:38:45.323070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.580 [2024-11-19 08:38:45.323119] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:23.580 [2024-11-19 08:38:45.323408] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:23.580 [2024-11-19 08:38:45.323430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.580 [2024-11-19 08:38:45.323458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.580 [2024-11-19 08:38:45.323478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:17:23.580 [2024-11-19 08:38:45.323490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.580 [2024-11-19 08:38:45.323614] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 95e01cfd-0408-4dec-9b3a-14a6a791ab14 00:17:23.580 [2024-11-19 08:38:45.325135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.580 [2024-11-19 08:38:45.325164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:23.580 [2024-11-19 08:38:45.325176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:23.580 [2024-11-19 08:38:45.325183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.580 [2024-11-19 08:38:45.332801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.580 [2024-11-19 08:38:45.332852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.580 [2024-11-19 08:38:45.332880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.496 ms 00:17:23.580 [2024-11-19 08:38:45.332890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.580 [2024-11-19 08:38:45.333012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.580 [2024-11-19 08:38:45.333034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.580 [2024-11-19 08:38:45.333066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:23.580 [2024-11-19 08:38:45.333074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.580 [2024-11-19 08:38:45.333187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.580 [2024-11-19 08:38:45.333208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:23.580 [2024-11-19 08:38:45.333218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:23.580 [2024-11-19 08:38:45.333225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.580 [2024-11-19 08:38:45.333277] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:23.580 [2024-11-19 08:38:45.335009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.580 [2024-11-19 
08:38:45.335043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.580 [2024-11-19 08:38:45.335064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:17:23.580 [2024-11-19 08:38:45.335085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.580 [2024-11-19 08:38:45.335169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.580 [2024-11-19 08:38:45.335180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:23.580 [2024-11-19 08:38:45.335188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:23.580 [2024-11-19 08:38:45.335198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.580 [2024-11-19 08:38:45.335255] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:23.580 [2024-11-19 08:38:45.335394] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:23.580 [2024-11-19 08:38:45.335411] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:23.580 [2024-11-19 08:38:45.335423] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:23.580 [2024-11-19 08:38:45.335432] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:23.580 [2024-11-19 08:38:45.335442] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:23.580 [2024-11-19 08:38:45.335452] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:23.580 [2024-11-19 08:38:45.335462] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:23.580 [2024-11-19 08:38:45.335470] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:23.580 [2024-11-19 08:38:45.335482] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:23.580 [2024-11-19 08:38:45.335492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.580 [2024-11-19 08:38:45.335513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:23.580 [2024-11-19 08:38:45.335521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:17:23.580 [2024-11-19 08:38:45.335530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.580 [2024-11-19 08:38:45.335635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.580 [2024-11-19 08:38:45.335653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:23.580 [2024-11-19 08:38:45.335663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:23.580 [2024-11-19 08:38:45.335673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.580 [2024-11-19 08:38:45.335823] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:23.580 [2024-11-19 08:38:45.335848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:23.580 [2024-11-19 08:38:45.335856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.580 [2024-11-19 08:38:45.335878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.580 [2024-11-19 08:38:45.335886] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:23.580 [2024-11-19 08:38:45.335894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:23.580 [2024-11-19 08:38:45.335901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:23.580 [2024-11-19 08:38:45.335911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:23.580 [2024-11-19 08:38:45.335917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:23.580 [2024-11-19 08:38:45.335926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.580 [2024-11-19 08:38:45.335932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:23.580 [2024-11-19 08:38:45.335940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:23.580 [2024-11-19 08:38:45.335947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.580 [2024-11-19 08:38:45.335960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:23.580 [2024-11-19 08:38:45.335967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:23.580 [2024-11-19 08:38:45.335974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.580 [2024-11-19 08:38:45.335981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:23.580 [2024-11-19 08:38:45.335989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:23.580 [2024-11-19 08:38:45.335996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.580 [2024-11-19 08:38:45.336004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:23.580 [2024-11-19 08:38:45.336010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:23.580 [2024-11-19 08:38:45.336018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.580 [2024-11-19 08:38:45.336024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:23.580 [2024-11-19 08:38:45.336032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:23.580 [2024-11-19 08:38:45.336038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.580 [2024-11-19 08:38:45.336047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:23.580 [2024-11-19 08:38:45.336053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:23.581 [2024-11-19 08:38:45.336061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.581 [2024-11-19 08:38:45.336068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:23.581 [2024-11-19 08:38:45.336077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:23.581 [2024-11-19 08:38:45.336083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.581 [2024-11-19 08:38:45.336091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:23.581 [2024-11-19 08:38:45.336097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:23.581 [2024-11-19 08:38:45.336106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.581 [2024-11-19 08:38:45.336113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:23.581 [2024-11-19 08:38:45.336121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:23.581 [2024-11-19 08:38:45.336127] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.581 [2024-11-19 08:38:45.336135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:23.581 [2024-11-19 08:38:45.336141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:23.581 [2024-11-19 08:38:45.336149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.581 [2024-11-19 08:38:45.336156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:23.581 [2024-11-19 08:38:45.336164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:23.581 [2024-11-19 08:38:45.336169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.581 [2024-11-19 08:38:45.336177] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:23.581 [2024-11-19 08:38:45.336184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:23.581 [2024-11-19 08:38:45.336195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.581 [2024-11-19 08:38:45.336214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.581 [2024-11-19 08:38:45.336225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:23.581 [2024-11-19 08:38:45.336232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:23.581 [2024-11-19 08:38:45.336240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:23.581 [2024-11-19 08:38:45.336247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:23.581 [2024-11-19 08:38:45.336255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:23.581 [2024-11-19 08:38:45.336261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:23.581 [2024-11-19 08:38:45.336278] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:23.581 [2024-11-19 08:38:45.336289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.581 [2024-11-19 08:38:45.336300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:23.581 [2024-11-19 08:38:45.336308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:23.581 [2024-11-19 08:38:45.336316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:23.581 [2024-11-19 08:38:45.336323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:23.581 [2024-11-19 08:38:45.336332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:23.581 [2024-11-19 08:38:45.336339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:23.581 [2024-11-19 08:38:45.336349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:23.581 [2024-11-19 08:38:45.336358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:23.581 [2024-11-19 08:38:45.336367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:23.581 [2024-11-19 08:38:45.336374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:23.581 [2024-11-19 08:38:45.336382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:23.581 [2024-11-19 08:38:45.336389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:23.581 [2024-11-19 08:38:45.336397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:23.581 [2024-11-19 08:38:45.336404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:23.581 [2024-11-19 08:38:45.336412] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:23.581 [2024-11-19 08:38:45.336422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.581 [2024-11-19 08:38:45.336431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:23.581 [2024-11-19 08:38:45.336438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:23.581 [2024-11-19 08:38:45.336446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:23.581 [2024-11-19 08:38:45.336453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:23.581 [2024-11-19 08:38:45.336462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.581 [2024-11-19 08:38:45.336470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:23.581 [2024-11-19 08:38:45.336497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:17:23.581 [2024-11-19 08:38:45.336505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.581 [2024-11-19 08:38:45.336658] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:17:23.581 [2024-11-19 08:38:45.336678] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:26.121 [2024-11-19 08:38:47.587902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.587998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:26.121 [2024-11-19 08:38:47.588018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2255.576 ms 00:17:26.121 [2024-11-19 08:38:47.588057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.599187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.599255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.121 [2024-11-19 08:38:47.599299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.024 ms 00:17:26.121 [2024-11-19 08:38:47.599310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.599435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.599444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:26.121 [2024-11-19 08:38:47.599454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:26.121 [2024-11-19 08:38:47.599462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.617589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.617644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.121 [2024-11-19 08:38:47.617669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.075 ms 00:17:26.121 [2024-11-19 08:38:47.617679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.617775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.617787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.121 [2024-11-19 08:38:47.617800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.121 [2024-11-19 08:38:47.617810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.618328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.618373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.121 [2024-11-19 08:38:47.618386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:17:26.121 [2024-11-19 08:38:47.618398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.618583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.618618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.121 [2024-11-19 08:38:47.618645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:26.121 [2024-11-19 08:38:47.618655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.626287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.626332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.121 [2024-11-19 
08:38:47.626347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.586 ms 00:17:26.121 [2024-11-19 08:38:47.626357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.633790] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:26.121 [2024-11-19 08:38:47.650073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.650139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:26.121 [2024-11-19 08:38:47.650151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.595 ms 00:17:26.121 [2024-11-19 08:38:47.650195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.696010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.696087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:26.121 [2024-11-19 08:38:47.696101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.824 ms 00:17:26.121 [2024-11-19 08:38:47.696129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.696330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.696369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:26.121 [2024-11-19 08:38:47.696380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:17:26.121 [2024-11-19 08:38:47.696389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.699512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.699549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:26.121 [2024-11-19 08:38:47.699561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.075 ms 00:17:26.121 [2024-11-19 08:38:47.699571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.702295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.702333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:26.121 [2024-11-19 08:38:47.702342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.654 ms 00:17:26.121 [2024-11-19 08:38:47.702351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.702634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.702655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:26.121 [2024-11-19 08:38:47.702664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:17:26.121 [2024-11-19 08:38:47.702693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.724224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.724305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:26.121 [2024-11-19 08:38:47.724318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.498 ms 00:17:26.121 [2024-11-19 08:38:47.724329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.728588] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.728626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:26.121 [2024-11-19 08:38:47.728637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.138 ms 00:17:26.121 [2024-11-19 08:38:47.728646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.731775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.731809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:26.121 [2024-11-19 08:38:47.731819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:17:26.121 [2024-11-19 08:38:47.731827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.735329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.735368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:26.121 [2024-11-19 08:38:47.735378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.433 ms 00:17:26.121 [2024-11-19 08:38:47.735390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.735462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.735474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:26.121 [2024-11-19 08:38:47.735483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:26.121 [2024-11-19 08:38:47.735493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.735623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.121 [2024-11-19 08:38:47.735635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:26.121 [2024-11-19 08:38:47.735647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:26.121 [2024-11-19 08:38:47.735656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.121 [2024-11-19 08:38:47.736930] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2418.223 ms, result 0 00:17:26.121 { 00:17:26.121 "name": "ftl0", 00:17:26.121 "uuid": "95e01cfd-0408-4dec-9b3a-14a6a791ab14" 00:17:26.121 } 00:17:26.121 08:38:47 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:26.121 08:38:47 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:26.121 08:38:47 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:26.121 08:38:47 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:26.121 08:38:47 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:26.122 08:38:47 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:26.122 08:38:47 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:26.122 08:38:47 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:26.425 [ 00:17:26.425 { 00:17:26.425 "name": "ftl0", 00:17:26.425 "aliases": [ 00:17:26.425 "95e01cfd-0408-4dec-9b3a-14a6a791ab14" 00:17:26.425 ], 00:17:26.425 "product_name": "FTL disk", 00:17:26.425 
"block_size": 4096, 00:17:26.425 "num_blocks": 20971520, 00:17:26.425 "uuid": "95e01cfd-0408-4dec-9b3a-14a6a791ab14", 00:17:26.425 "assigned_rate_limits": { 00:17:26.425 "rw_ios_per_sec": 0, 00:17:26.425 "rw_mbytes_per_sec": 0, 00:17:26.425 "r_mbytes_per_sec": 0, 00:17:26.425 "w_mbytes_per_sec": 0 00:17:26.425 }, 00:17:26.425 "claimed": false, 00:17:26.425 "zoned": false, 00:17:26.425 "supported_io_types": { 00:17:26.425 "read": true, 00:17:26.425 "write": true, 00:17:26.425 "unmap": true, 00:17:26.425 "flush": true, 00:17:26.425 "reset": false, 00:17:26.425 "nvme_admin": false, 00:17:26.425 "nvme_io": false, 00:17:26.425 "nvme_io_md": false, 00:17:26.425 "write_zeroes": true, 00:17:26.425 "zcopy": false, 00:17:26.425 "get_zone_info": false, 00:17:26.425 "zone_management": false, 00:17:26.425 "zone_append": false, 00:17:26.425 "compare": false, 00:17:26.425 "compare_and_write": false, 00:17:26.425 "abort": false, 00:17:26.425 "seek_hole": false, 00:17:26.425 "seek_data": false, 00:17:26.425 "copy": false, 00:17:26.425 "nvme_iov_md": false 00:17:26.425 }, 00:17:26.425 "driver_specific": { 00:17:26.425 "ftl": { 00:17:26.425 "base_bdev": "490c0ae4-6fdd-47bc-b033-75d1ab83b92a", 00:17:26.425 "cache": "nvc0n1p0" 00:17:26.425 } 00:17:26.425 } 00:17:26.425 } 00:17:26.425 ] 00:17:26.425 08:38:48 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:26.425 08:38:48 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:26.425 08:38:48 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:26.684 08:38:48 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:26.684 08:38:48 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:26.945 [2024-11-19 08:38:48.603064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.945 [2024-11-19 08:38:48.603119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:26.945 [2024-11-19 08:38:48.603134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.945 [2024-11-19 08:38:48.603143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.945 [2024-11-19 08:38:48.603191] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:26.945 [2024-11-19 08:38:48.603921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.945 [2024-11-19 08:38:48.603944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:26.945 [2024-11-19 08:38:48.603953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:17:26.945 [2024-11-19 08:38:48.603965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.945 [2024-11-19 08:38:48.604858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.945 [2024-11-19 08:38:48.604892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:26.945 [2024-11-19 08:38:48.604901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:17:26.945 [2024-11-19 08:38:48.604910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.945 [2024-11-19 08:38:48.607372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.945 [2024-11-19 08:38:48.607397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:26.945 [2024-11-19 
08:38:48.607405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.427 ms 00:17:26.945 [2024-11-19 08:38:48.607414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.945 [2024-11-19 08:38:48.612335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.945 [2024-11-19 08:38:48.612371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:26.945 [2024-11-19 08:38:48.612396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.885 ms 00:17:26.945 [2024-11-19 08:38:48.612404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.945 [2024-11-19 08:38:48.616488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.945 [2024-11-19 08:38:48.616541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:26.945 [2024-11-19 08:38:48.616551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.925 ms 00:17:26.945 [2024-11-19 08:38:48.616561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.945 [2024-11-19 08:38:48.621538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.945 [2024-11-19 08:38:48.621581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:26.945 [2024-11-19 08:38:48.621593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.935 ms 00:17:26.945 [2024-11-19 08:38:48.621602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.945 [2024-11-19 08:38:48.621845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.945 [2024-11-19 08:38:48.621866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:26.946 [2024-11-19 08:38:48.621876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:17:26.946 [2024-11-19 08:38:48.621884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.946 [2024-11-19 08:38:48.623884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.946 [2024-11-19 08:38:48.623921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:26.946 [2024-11-19 08:38:48.623930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.955 ms 00:17:26.946 [2024-11-19 08:38:48.623939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.946 [2024-11-19 08:38:48.625518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.946 [2024-11-19 08:38:48.625560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:26.946 [2024-11-19 08:38:48.625569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.539 ms 00:17:26.946 [2024-11-19 08:38:48.625578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.946 [2024-11-19 08:38:48.626746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.946 [2024-11-19 08:38:48.626781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:26.946 [2024-11-19 08:38:48.626790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:17:26.946 [2024-11-19 08:38:48.626798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.946 [2024-11-19 08:38:48.628039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.946 [2024-11-19 08:38:48.628074] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:26.946 [2024-11-19 08:38:48.628083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:17:26.946 [2024-11-19 08:38:48.628091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.946 [2024-11-19 08:38:48.628145] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:26.946 [2024-11-19 08:38:48.628162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 
08:38:48.628384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:26.946 [2024-11-19 08:38:48.628612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:26.946 [2024-11-19 08:38:48.628841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.628997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.629006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.629013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.629022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.629030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.629038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.629051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.629062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.629070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.629078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.629085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:26.947 [2024-11-19 08:38:48.629103] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:26.947 [2024-11-19 08:38:48.629111] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 95e01cfd-0408-4dec-9b3a-14a6a791ab14 00:17:26.947 [2024-11-19 08:38:48.629123] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:26.947 [2024-11-19 08:38:48.629130] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:26.947 [2024-11-19 08:38:48.629139] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:26.947 [2024-11-19 08:38:48.629146] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:26.947 [2024-11-19 08:38:48.629169] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:26.947 [2024-11-19 08:38:48.629176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:26.947 [2024-11-19 08:38:48.629185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:26.947 [2024-11-19 08:38:48.629191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:26.947 [2024-11-19 08:38:48.629200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:26.947 [2024-11-19 08:38:48.629208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.947 [2024-11-19 08:38:48.629217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:26.947 [2024-11-19 08:38:48.629225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:17:26.947 [2024-11-19 08:38:48.629246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.631060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.947 [2024-11-19 08:38:48.631087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:26.947 [2024-11-19 08:38:48.631097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.774 ms 00:17:26.947 [2024-11-19 08:38:48.631106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.631242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.947 [2024-11-19 08:38:48.631254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:26.947 [2024-11-19 08:38:48.631262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:26.947 [2024-11-19 08:38:48.631271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.637728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.947 [2024-11-19 08:38:48.637769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.947 [2024-11-19 08:38:48.637795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.947 [2024-11-19 08:38:48.637804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 
[2024-11-19 08:38:48.637899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.947 [2024-11-19 08:38:48.637922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.947 [2024-11-19 08:38:48.637930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.947 [2024-11-19 08:38:48.637939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.638068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.947 [2024-11-19 08:38:48.638089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.947 [2024-11-19 08:38:48.638097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.947 [2024-11-19 08:38:48.638106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.638143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.947 [2024-11-19 08:38:48.638156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.947 [2024-11-19 08:38:48.638165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.947 [2024-11-19 08:38:48.638174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.651976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.947 [2024-11-19 08:38:48.652031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.947 [2024-11-19 08:38:48.652043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.947 [2024-11-19 08:38:48.652068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.660888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.947 [2024-11-19 08:38:48.660939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.947 [2024-11-19 08:38:48.660950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.947 [2024-11-19 08:38:48.660976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.661094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.947 [2024-11-19 08:38:48.661109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.947 [2024-11-19 08:38:48.661117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.947 [2024-11-19 08:38:48.661126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.661242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.947 [2024-11-19 08:38:48.661259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.947 [2024-11-19 08:38:48.661267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.947 [2024-11-19 08:38:48.661276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.661387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.947 [2024-11-19 08:38:48.661410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.947 [2024-11-19 08:38:48.661417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.947 [2024-11-19 08:38:48.661426] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.661495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.947 [2024-11-19 08:38:48.661509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:26.947 [2024-11-19 08:38:48.661517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.947 [2024-11-19 08:38:48.661526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.661588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.947 [2024-11-19 08:38:48.661607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.947 [2024-11-19 08:38:48.661618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.947 [2024-11-19 08:38:48.661628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.661690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.947 [2024-11-19 08:38:48.661701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.947 [2024-11-19 08:38:48.661709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.947 [2024-11-19 08:38:48.661782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.947 [2024-11-19 08:38:48.662039] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.033 ms, result 0 00:17:26.947 true 00:17:26.947 08:38:48 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 85026 00:17:26.947 08:38:48 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 85026 ']' 00:17:26.947 08:38:48 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 85026 00:17:26.947 08:38:48 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:26.947 08:38:48 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.947 08:38:48 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85026 00:17:26.948 08:38:48 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.948 08:38:48 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.948 killing process with pid 85026 00:17:26.948 08:38:48 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85026' 00:17:26.948 08:38:48 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 85026 00:17:26.948 08:38:48 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 85026 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:32.226 08:38:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:32.226 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:32.226 fio-3.35 00:17:32.226 Starting 1 thread 00:17:36.422 00:17:36.422 test: (groupid=0, jobs=1): err= 0: pid=85207: Tue Nov 19 08:38:58 2024 00:17:36.422 read: IOPS=994, BW=66.0MiB/s (69.2MB/s)(255MiB/3855msec) 00:17:36.422 slat (nsec): min=4672, max=40334, avg=7156.73, stdev=2936.17 00:17:36.422 clat (usec): min=275, max=1319, avg=450.61, stdev=61.73 00:17:36.422 lat (usec): min=281, max=1331, avg=457.77, stdev=62.07 00:17:36.422 clat percentiles (usec): 00:17:36.422 | 1.00th=[ 314], 5.00th=[ 371], 10.00th=[ 375], 20.00th=[ 388], 00:17:36.422 | 30.00th=[ 433], 40.00th=[ 441], 50.00th=[ 449], 60.00th=[ 457], 00:17:36.422 | 70.00th=[ 482], 80.00th=[ 506], 90.00th=[ 523], 95.00th=[ 529], 00:17:36.422 | 99.00th=[ 586], 99.50th=[ 660], 99.90th=[ 848], 99.95th=[ 1020], 00:17:36.422 | 99.99th=[ 1319] 00:17:36.422 write: IOPS=1001, BW=66.5MiB/s (69.7MB/s)(256MiB/3851msec); 0 zone resets 00:17:36.422 slat (usec): min=15, max=132, avg=22.32, stdev= 6.27 00:17:36.422 clat (usec): min=342, max=5003, avg=509.70, stdev=103.53 00:17:36.422 lat (usec): min=361, max=5032, avg=532.02, stdev=104.15 00:17:36.422 clat percentiles (usec): 00:17:36.422 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 429], 20.00th=[ 461], 00:17:36.422 | 30.00th=[ 469], 40.00th=[ 482], 50.00th=[ 515], 60.00th=[ 523], 00:17:36.422 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 586], 95.00th=[ 603], 00:17:36.422 | 99.00th=[ 848], 99.50th=[ 898], 99.90th=[ 979], 99.95th=[ 1029], 00:17:36.422 | 99.99th=[ 5014] 00:17:36.422 bw ( KiB/s): min=63512, max=70040, per=99.61%, avg=67825.14, stdev=2065.10, samples=7 00:17:36.422 iops : min= 934, max= 1030, avg=997.43, stdev=30.37, samples=7 00:17:36.422 lat (usec) : 500=60.76%, 750=38.18%, 1000=0.99% 00:17:36.422 lat 
(msec) : 2=0.05%, 10=0.01% 00:17:36.422 cpu : usr=98.96%, sys=0.13%, ctx=6, majf=0, minf=1326 00:17:36.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:36.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.422 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:36.422 00:17:36.422 Run status group 0 (all jobs): 00:17:36.422 READ: bw=66.0MiB/s (69.2MB/s), 66.0MiB/s-66.0MiB/s (69.2MB/s-69.2MB/s), io=255MiB (267MB), run=3855-3855msec 00:17:36.422 WRITE: bw=66.5MiB/s (69.7MB/s), 66.5MiB/s-66.5MiB/s (69.7MB/s-69.7MB/s), io=256MiB (269MB), run=3851-3851msec 00:17:36.990 ----------------------------------------------------- 00:17:36.990 Suppressions used: 00:17:36.990 count bytes template 00:17:36.990 1 5 /usr/src/fio/parse.c 00:17:36.990 1 8 libtcmalloc_minimal.so 00:17:36.990 1 904 libcrypto.so 00:17:36.990 ----------------------------------------------------- 00:17:36.990 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:36.990 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:37.248 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:37.249 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:37.249 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:37.249 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:37.249 08:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:37.249 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:37.249 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:37.249 fio-3.35 00:17:37.249 Starting 2 threads 00:18:09.337 00:18:09.337 first_half: (groupid=0, jobs=1): err= 0: pid=85294: Tue Nov 19 08:39:28 2024 00:18:09.337 read: IOPS=2282, BW=9131KiB/s (9350kB/s)(255MiB/28590msec) 00:18:09.337 slat (usec): min=3, max=106, avg= 9.72, stdev= 4.03 00:18:09.337 clat (usec): min=1422, max=289206, avg=42122.41, stdev=24403.33 00:18:09.337 lat (usec): min=1433, max=289211, avg=42132.14, stdev=24403.88 00:18:09.337 clat percentiles (msec): 00:18:09.337 | 1.00th=[ 13], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 37], 00:18:09.337 | 30.00th=[ 38], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 39], 00:18:09.337 | 70.00th=[ 40], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 53], 00:18:09.337 | 99.00th=[ 188], 99.50th=[ 207], 99.90th=[ 241], 99.95th=[ 253], 00:18:09.337 | 99.99th=[ 279] 00:18:09.337 write: IOPS=2872, BW=11.2MiB/s (11.8MB/s)(256MiB/22817msec); 0 zone resets 00:18:09.337 slat (usec): min=4, max=1135, avg=10.92, stdev= 9.24 00:18:09.337 clat (usec): min=524, max=111281, avg=13770.66, stdev=22169.96 00:18:09.337 lat (usec): min=542, max=111305, avg=13781.59, stdev=22170.65 00:18:09.337 clat percentiles (usec): 00:18:09.337 | 1.00th=[ 1139], 5.00th=[ 1598], 10.00th=[ 1975], 20.00th=[ 2442], 00:18:09.337 | 30.00th=[ 3097], 40.00th=[ 4948], 50.00th=[ 7111], 60.00th=[ 8848], 00:18:09.337 | 70.00th=[ 10814], 80.00th=[ 13435], 90.00th=[ 36963], 95.00th=[ 84411], 00:18:09.337 | 99.00th=[100140], 99.50th=[103285], 99.90th=[107480], 99.95th=[109577], 00:18:09.337 | 99.99th=[110625] 00:18:09.337 bw ( KiB/s): min= 456, max=39128, per=84.51%, avg=19418.07, stdev=11561.85, samples=27 00:18:09.337 iops : min= 114, max= 9782, avg=4854.52, stdev=2890.46, samples=27 00:18:09.337 lat (usec) : 750=0.02%, 1000=0.20% 00:18:09.337 lat (msec) : 2=5.15%, 4=12.48%, 10=15.98%, 20=12.15%, 50=48.08% 00:18:09.337 lat (msec) : 100=3.77%, 250=2.12%, 500=0.03% 00:18:09.337 cpu : usr=99.16%, sys=0.20%, ctx=42, majf=0, minf=5567 00:18:09.337 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:09.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.337 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:09.337 issued rwts: total=65261,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.337 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:09.337 second_half: (groupid=0, jobs=1): err= 0: pid=85295: Tue Nov 19 08:39:28 2024 00:18:09.337 read: IOPS=2301, BW=9207KiB/s (9428kB/s)(255MiB/28374msec) 00:18:09.337 slat (nsec): min=3435, max=95077, avg=9261.01, stdev=4378.44 00:18:09.337 clat (usec): min=1080, max=297744, avg=43460.55, stdev=24999.05 00:18:09.337 lat (usec): min=1088, max=297752, avg=43469.81, stdev=24999.97 00:18:09.337 clat percentiles (msec): 00:18:09.337 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 37], 00:18:09.337 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39], 00:18:09.337 | 70.00th=[ 40], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 72], 00:18:09.337 | 
99.00th=[ 194], 99.50th=[ 213], 99.90th=[ 257], 99.95th=[ 279], 00:18:09.337 | 99.99th=[ 296] 00:18:09.337 write: IOPS=2994, BW=11.7MiB/s (12.3MB/s)(256MiB/21882msec); 0 zone resets 00:18:09.337 slat (usec): min=4, max=464, avg=10.26, stdev= 6.61 00:18:09.337 clat (usec): min=456, max=111605, avg=12069.25, stdev=21041.15 00:18:09.337 lat (usec): min=474, max=111614, avg=12079.51, stdev=21041.45 00:18:09.337 clat percentiles (usec): 00:18:09.337 | 1.00th=[ 1221], 5.00th=[ 1696], 10.00th=[ 1975], 20.00th=[ 2343], 00:18:09.337 | 30.00th=[ 3064], 40.00th=[ 5407], 50.00th=[ 6783], 60.00th=[ 7963], 00:18:09.337 | 70.00th=[ 9110], 80.00th=[ 11469], 90.00th=[ 15008], 95.00th=[ 84411], 00:18:09.337 | 99.00th=[100140], 99.50th=[102237], 99.90th=[106431], 99.95th=[108528], 00:18:09.337 | 99.99th=[110625] 00:18:09.337 bw ( KiB/s): min= 2456, max=44880, per=87.75%, avg=20164.92, stdev=12889.49, samples=26 00:18:09.337 iops : min= 614, max=11220, avg=5041.23, stdev=3222.37, samples=26 00:18:09.337 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.09% 00:18:09.337 lat (msec) : 2=5.29%, 4=11.73%, 10=20.70%, 20=9.00%, 50=46.70% 00:18:09.337 lat (msec) : 100=4.49%, 250=1.93%, 500=0.07% 00:18:09.337 cpu : usr=99.28%, sys=0.13%, ctx=58, majf=0, minf=5569 00:18:09.337 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:09.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.337 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:09.337 issued rwts: total=65312,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.337 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:09.337 00:18:09.337 Run status group 0 (all jobs): 00:18:09.337 READ: bw=17.8MiB/s (18.7MB/s), 9131KiB/s-9207KiB/s (9350kB/s-9428kB/s), io=510MiB (535MB), run=28374-28590msec 00:18:09.337 WRITE: bw=22.4MiB/s (23.5MB/s), 11.2MiB/s-11.7MiB/s (11.8MB/s-12.3MB/s), io=512MiB (537MB), run=21882-22817msec 00:18:09.337 ----------------------------------------------------- 00:18:09.337 Suppressions used: 00:18:09.337 count bytes template 00:18:09.337 2 10 /usr/src/fio/parse.c 00:18:09.337 4 384 /usr/src/fio/iolog.c 00:18:09.337 1 8 libtcmalloc_minimal.so 00:18:09.337 1 904 libcrypto.so 00:18:09.337 ----------------------------------------------------- 00:18:09.337 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:09.337 
08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:09.337 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:09.338 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:09.338 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:09.338 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:09.338 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:09.338 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:09.338 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:09.338 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:09.338 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:09.338 08:39:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:09.338 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:09.338 fio-3.35 00:18:09.338 Starting 1 thread 00:18:24.257 00:18:24.257 test: (groupid=0, jobs=1): err= 0: pid=85655: Tue Nov 19 08:39:45 2024 00:18:24.257 read: IOPS=7075, BW=27.6MiB/s (29.0MB/s)(255MiB/9215msec) 00:18:24.257 slat (nsec): min=3451, max=82857, avg=6366.25, stdev=2682.76 00:18:24.257 clat (usec): min=685, max=33239, avg=18080.34, stdev=2478.72 00:18:24.257 lat (usec): min=689, max=33244, avg=18086.71, stdev=2479.70 00:18:24.257 clat percentiles (usec): 00:18:24.257 | 1.00th=[15795], 5.00th=[16057], 10.00th=[16188], 20.00th=[16319], 00:18:24.257 | 30.00th=[16450], 40.00th=[16581], 50.00th=[16712], 60.00th=[16909], 00:18:24.257 | 70.00th=[17433], 80.00th=[21627], 90.00th=[22152], 95.00th=[22414], 00:18:24.257 | 99.00th=[22676], 99.50th=[22938], 99.90th=[25297], 99.95th=[28967], 00:18:24.257 | 99.99th=[32375] 00:18:24.257 write: IOPS=12.4k, BW=48.4MiB/s (50.8MB/s)(256MiB/5286msec); 0 zone resets 00:18:24.257 slat (usec): min=4, max=1237, avg= 8.47, stdev= 8.73 00:18:24.257 clat (usec): min=678, max=57504, avg=10275.01, stdev=12326.33 00:18:24.257 lat (usec): min=686, max=57511, avg=10283.48, stdev=12326.37 00:18:24.257 clat percentiles (usec): 00:18:24.257 | 1.00th=[ 1057], 5.00th=[ 1303], 10.00th=[ 1450], 20.00th=[ 1631], 00:18:24.257 | 30.00th=[ 1811], 40.00th=[ 2147], 50.00th=[ 6718], 60.00th=[ 7701], 00:18:24.257 | 70.00th=[ 9110], 80.00th=[11600], 90.00th=[36439], 95.00th=[38011], 00:18:24.257 | 99.00th=[40633], 99.50th=[42206], 99.90th=[53740], 99.95th=[54789], 00:18:24.257 | 99.99th=[56361] 00:18:24.257 bw ( KiB/s): min=24672, max=66056, per=96.11%, avg=47662.55, stdev=11718.49, samples=11 00:18:24.257 iops : min= 6168, max=16514, avg=11915.64, stdev=2929.62, samples=11 00:18:24.257 lat (usec) : 750=0.01%, 1000=0.26% 00:18:24.257 lat (msec) : 2=18.53%, 4=2.28%, 10=15.87%, 20=41.29%, 50=21.67% 00:18:24.257 lat (msec) : 100=0.09% 00:18:24.257 cpu : usr=98.97%, sys=0.27%, ctx=28, majf=0, 
minf=5577 00:18:24.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:24.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.257 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:24.257 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:24.257 00:18:24.257 Run status group 0 (all jobs): 00:18:24.257 READ: bw=27.6MiB/s (29.0MB/s), 27.6MiB/s-27.6MiB/s (29.0MB/s-29.0MB/s), io=255MiB (267MB), run=9215-9215msec 00:18:24.257 WRITE: bw=48.4MiB/s (50.8MB/s), 48.4MiB/s-48.4MiB/s (50.8MB/s-50.8MB/s), io=256MiB (268MB), run=5286-5286msec 00:18:24.517 ----------------------------------------------------- 00:18:24.517 Suppressions used: 00:18:24.517 count bytes template 00:18:24.517 1 5 /usr/src/fio/parse.c 00:18:24.517 2 192 /usr/src/fio/iolog.c 00:18:24.517 1 8 libtcmalloc_minimal.so 00:18:24.517 1 904 libcrypto.so 00:18:24.517 ----------------------------------------------------- 00:18:24.517 00:18:24.517 08:39:46 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:24.517 08:39:46 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:24.517 08:39:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:24.517 08:39:46 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:24.517 08:39:46 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:24.517 Remove shared memory files 00:18:24.517 08:39:46 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:24.517 08:39:46 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:24.517 08:39:46 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:24.517 08:39:46 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70176 /dev/shm/spdk_tgt_trace.pid83966 00:18:24.517 08:39:46 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:24.517 08:39:46 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:24.517 ************************************ 00:18:24.517 END TEST ftl_fio_basic 00:18:24.517 ************************************ 00:18:24.517 00:18:24.517 real 1m4.968s 00:18:24.517 user 2m28.426s 00:18:24.517 sys 0m3.305s 00:18:24.517 08:39:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.517 08:39:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:24.777 08:39:46 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:24.777 08:39:46 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:24.777 08:39:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.777 08:39:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:24.777 ************************************ 00:18:24.777 START TEST ftl_bdevperf 00:18:24.777 ************************************ 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:24.777 * Looking for test storage... 
00:18:24.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.777 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:25.037 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.037 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.037 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.037 08:39:46 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:25.037 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.037 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:25.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.038 --rc genhtml_branch_coverage=1 00:18:25.038 --rc genhtml_function_coverage=1 00:18:25.038 --rc genhtml_legend=1 00:18:25.038 --rc geninfo_all_blocks=1 00:18:25.038 --rc geninfo_unexecuted_blocks=1 00:18:25.038 00:18:25.038 ' 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:25.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.038 --rc genhtml_branch_coverage=1 00:18:25.038 
--rc genhtml_function_coverage=1 00:18:25.038 --rc genhtml_legend=1 00:18:25.038 --rc geninfo_all_blocks=1 00:18:25.038 --rc geninfo_unexecuted_blocks=1 00:18:25.038 00:18:25.038 ' 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:25.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.038 --rc genhtml_branch_coverage=1 00:18:25.038 --rc genhtml_function_coverage=1 00:18:25.038 --rc genhtml_legend=1 00:18:25.038 --rc geninfo_all_blocks=1 00:18:25.038 --rc geninfo_unexecuted_blocks=1 00:18:25.038 00:18:25.038 ' 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:25.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.038 --rc genhtml_branch_coverage=1 00:18:25.038 --rc genhtml_function_coverage=1 00:18:25.038 --rc genhtml_legend=1 00:18:25.038 --rc geninfo_all_blocks=1 00:18:25.038 --rc geninfo_unexecuted_blocks=1 00:18:25.038 00:18:25.038 ' 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:25.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=85893 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 85893 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 85893 ']' 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.038 08:39:46 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:25.038 [2024-11-19 08:39:46.800054] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:18:25.038 [2024-11-19 08:39:46.800656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85893 ] 00:18:25.038 [2024-11-19 08:39:46.932696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.298 [2024-11-19 08:39:46.956927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.867 08:39:47 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.867 08:39:47 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:25.867 08:39:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:25.867 08:39:47 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:25.867 08:39:47 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:25.867 08:39:47 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:25.867 08:39:47 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:25.867 08:39:47 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:26.126 08:39:47 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:26.126 08:39:47 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:26.126 08:39:47 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:26.126 08:39:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:26.126 08:39:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:26.126 08:39:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:26.126 08:39:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:26.126 08:39:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:26.386 08:39:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:26.386 { 00:18:26.386 "name": "nvme0n1", 00:18:26.386 "aliases": [ 00:18:26.386 "89799844-726f-4966-8dc6-37cc83548bc7" 00:18:26.386 ], 00:18:26.386 "product_name": "NVMe disk", 00:18:26.386 "block_size": 4096, 00:18:26.386 "num_blocks": 1310720, 00:18:26.386 "uuid": "89799844-726f-4966-8dc6-37cc83548bc7", 00:18:26.386 "numa_id": -1, 00:18:26.386 "assigned_rate_limits": { 00:18:26.386 "rw_ios_per_sec": 0, 00:18:26.386 "rw_mbytes_per_sec": 0, 00:18:26.386 "r_mbytes_per_sec": 0, 00:18:26.386 "w_mbytes_per_sec": 0 00:18:26.386 }, 00:18:26.386 "claimed": true, 00:18:26.386 "claim_type": "read_many_write_one", 00:18:26.386 "zoned": false, 00:18:26.386 "supported_io_types": { 00:18:26.386 "read": true, 00:18:26.386 "write": true, 00:18:26.386 "unmap": true, 00:18:26.386 "flush": true, 00:18:26.386 "reset": true, 00:18:26.386 "nvme_admin": true, 00:18:26.386 "nvme_io": true, 00:18:26.386 "nvme_io_md": false, 00:18:26.386 "write_zeroes": true, 00:18:26.386 "zcopy": false, 00:18:26.386 "get_zone_info": false, 00:18:26.386 "zone_management": false, 00:18:26.386 "zone_append": false, 00:18:26.386 "compare": true, 00:18:26.386 "compare_and_write": false, 00:18:26.386 "abort": true, 00:18:26.386 "seek_hole": false, 00:18:26.386 "seek_data": false, 00:18:26.386 "copy": true, 00:18:26.386 "nvme_iov_md": false 00:18:26.386 }, 00:18:26.386 "driver_specific": { 00:18:26.386 
"nvme": [ 00:18:26.386 { 00:18:26.386 "pci_address": "0000:00:11.0", 00:18:26.386 "trid": { 00:18:26.386 "trtype": "PCIe", 00:18:26.386 "traddr": "0000:00:11.0" 00:18:26.386 }, 00:18:26.386 "ctrlr_data": { 00:18:26.386 "cntlid": 0, 00:18:26.386 "vendor_id": "0x1b36", 00:18:26.386 "model_number": "QEMU NVMe Ctrl", 00:18:26.386 "serial_number": "12341", 00:18:26.386 "firmware_revision": "8.0.0", 00:18:26.386 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:26.386 "oacs": { 00:18:26.386 "security": 0, 00:18:26.386 "format": 1, 00:18:26.386 "firmware": 0, 00:18:26.386 "ns_manage": 1 00:18:26.386 }, 00:18:26.386 "multi_ctrlr": false, 00:18:26.386 "ana_reporting": false 00:18:26.386 }, 00:18:26.386 "vs": { 00:18:26.386 "nvme_version": "1.4" 00:18:26.386 }, 00:18:26.386 "ns_data": { 00:18:26.386 "id": 1, 00:18:26.386 "can_share": false 00:18:26.386 } 00:18:26.386 } 00:18:26.386 ], 00:18:26.386 "mp_policy": "active_passive" 00:18:26.386 } 00:18:26.386 } 00:18:26.386 ]' 00:18:26.386 08:39:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:26.386 08:39:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:26.386 08:39:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:26.386 08:39:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:26.386 08:39:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:26.386 08:39:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:26.386 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:26.386 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:26.386 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:26.386 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:26.386 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:26.646 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=0c9d82c9-5054-428b-9434-90def728ef8e 00:18:26.646 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:26.646 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c9d82c9-5054-428b-9434-90def728ef8e 00:18:26.912 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:26.912 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=ed96b701-742e-454a-9396-e21e1bc0fdd3 00:18:26.912 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ed96b701-742e-454a-9396-e21e1bc0fdd3 00:18:27.181 08:39:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=05d9cae1-0c48-4421-b320-76bb4eac8165 00:18:27.181 08:39:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 05d9cae1-0c48-4421-b320-76bb4eac8165 00:18:27.181 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:27.181 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:27.181 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=05d9cae1-0c48-4421-b320-76bb4eac8165 00:18:27.181 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:27.181 08:39:48 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 05d9cae1-0c48-4421-b320-76bb4eac8165 00:18:27.181 08:39:48 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=05d9cae1-0c48-4421-b320-76bb4eac8165 00:18:27.181 08:39:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:27.181 08:39:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:27.181 08:39:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:27.182 08:39:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05d9cae1-0c48-4421-b320-76bb4eac8165 00:18:27.442 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:27.442 { 00:18:27.442 "name": "05d9cae1-0c48-4421-b320-76bb4eac8165", 00:18:27.442 "aliases": [ 00:18:27.442 "lvs/nvme0n1p0" 00:18:27.442 ], 00:18:27.442 "product_name": "Logical Volume", 00:18:27.442 "block_size": 4096, 00:18:27.442 "num_blocks": 26476544, 00:18:27.442 "uuid": "05d9cae1-0c48-4421-b320-76bb4eac8165", 00:18:27.442 "assigned_rate_limits": { 00:18:27.442 "rw_ios_per_sec": 0, 00:18:27.442 "rw_mbytes_per_sec": 0, 00:18:27.442 "r_mbytes_per_sec": 0, 00:18:27.442 "w_mbytes_per_sec": 0 00:18:27.442 }, 00:18:27.442 "claimed": false, 00:18:27.442 "zoned": false, 00:18:27.442 "supported_io_types": { 00:18:27.442 "read": true, 00:18:27.442 "write": true, 00:18:27.442 "unmap": true, 00:18:27.442 "flush": false, 00:18:27.442 "reset": true, 00:18:27.442 "nvme_admin": false, 00:18:27.442 "nvme_io": false, 00:18:27.442 "nvme_io_md": false, 00:18:27.442 "write_zeroes": true, 00:18:27.442 "zcopy": false, 00:18:27.442 "get_zone_info": false, 00:18:27.442 "zone_management": false, 00:18:27.442 "zone_append": false, 00:18:27.442 "compare": false, 00:18:27.442 "compare_and_write": false, 00:18:27.442 "abort": false, 00:18:27.442 "seek_hole": true, 00:18:27.442 "seek_data": true, 00:18:27.442 "copy": false, 00:18:27.442 "nvme_iov_md": false 00:18:27.442 }, 00:18:27.442 "driver_specific": { 00:18:27.442 "lvol": { 00:18:27.442 "lvol_store_uuid": "ed96b701-742e-454a-9396-e21e1bc0fdd3", 00:18:27.442 "base_bdev": "nvme0n1", 00:18:27.442 "thin_provision": true, 00:18:27.442 "num_allocated_clusters": 0, 00:18:27.442 "snapshot": false, 00:18:27.442 "clone": false, 00:18:27.442 "esnap_clone": false 00:18:27.442 } 00:18:27.442 } 00:18:27.442 } 00:18:27.442 ]' 00:18:27.442 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:27.442 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:27.442 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:27.442 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:27.442 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:27.442 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:27.442 08:39:49 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:27.442 08:39:49 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:27.442 08:39:49 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:27.701 08:39:49 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:27.701 08:39:49 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:27.701 08:39:49 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 05d9cae1-0c48-4421-b320-76bb4eac8165 00:18:27.702 08:39:49 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=05d9cae1-0c48-4421-b320-76bb4eac8165 00:18:27.702 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:27.702 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:27.702 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:27.702 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05d9cae1-0c48-4421-b320-76bb4eac8165 00:18:27.961 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:27.962 { 00:18:27.962 "name": "05d9cae1-0c48-4421-b320-76bb4eac8165", 00:18:27.962 "aliases": [ 00:18:27.962 "lvs/nvme0n1p0" 00:18:27.962 ], 00:18:27.962 "product_name": "Logical Volume", 00:18:27.962 "block_size": 4096, 00:18:27.962 "num_blocks": 26476544, 00:18:27.962 "uuid": "05d9cae1-0c48-4421-b320-76bb4eac8165", 00:18:27.962 "assigned_rate_limits": { 00:18:27.962 "rw_ios_per_sec": 0, 00:18:27.962 "rw_mbytes_per_sec": 0, 00:18:27.962 "r_mbytes_per_sec": 0, 00:18:27.962 "w_mbytes_per_sec": 0 00:18:27.962 }, 00:18:27.962 "claimed": false, 00:18:27.962 "zoned": false, 00:18:27.962 "supported_io_types": { 00:18:27.962 "read": true, 00:18:27.962 "write": true, 00:18:27.962 "unmap": true, 00:18:27.962 "flush": false, 00:18:27.962 "reset": true, 00:18:27.962 "nvme_admin": false, 00:18:27.962 "nvme_io": false, 00:18:27.962 "nvme_io_md": false, 00:18:27.962 "write_zeroes": true, 00:18:27.962 "zcopy": false, 00:18:27.962 "get_zone_info": false, 00:18:27.962 "zone_management": false, 00:18:27.962 "zone_append": false, 00:18:27.962 "compare": false, 00:18:27.962 "compare_and_write": false, 00:18:27.962 "abort": false, 00:18:27.962 "seek_hole": true, 00:18:27.962 "seek_data": true, 00:18:27.962 "copy": false, 00:18:27.962 "nvme_iov_md": false 00:18:27.962 }, 00:18:27.962 "driver_specific": { 00:18:27.962 "lvol": { 00:18:27.962 "lvol_store_uuid": "ed96b701-742e-454a-9396-e21e1bc0fdd3", 00:18:27.962 "base_bdev": "nvme0n1", 00:18:27.962 "thin_provision": true, 00:18:27.962 "num_allocated_clusters": 0, 00:18:27.962 "snapshot": false, 00:18:27.962 "clone": false, 00:18:27.962 "esnap_clone": false 00:18:27.962 } 00:18:27.962 } 00:18:27.962 } 00:18:27.962 ]' 00:18:27.962 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:27.962 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:27.962 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:27.962 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:27.962 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:27.962 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:27.962 08:39:49 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:27.962 08:39:49 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:28.221 08:39:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:28.221 08:39:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 05d9cae1-0c48-4421-b320-76bb4eac8165 00:18:28.221 08:39:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=05d9cae1-0c48-4421-b320-76bb4eac8165 00:18:28.221 08:39:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:28.221 08:39:50 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:18:28.221 08:39:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:28.221 08:39:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05d9cae1-0c48-4421-b320-76bb4eac8165 00:18:28.481 08:39:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:28.481 { 00:18:28.481 "name": "05d9cae1-0c48-4421-b320-76bb4eac8165", 00:18:28.481 "aliases": [ 00:18:28.481 "lvs/nvme0n1p0" 00:18:28.481 ], 00:18:28.481 "product_name": "Logical Volume", 00:18:28.481 "block_size": 4096, 00:18:28.481 "num_blocks": 26476544, 00:18:28.481 "uuid": "05d9cae1-0c48-4421-b320-76bb4eac8165", 00:18:28.481 "assigned_rate_limits": { 00:18:28.481 "rw_ios_per_sec": 0, 00:18:28.481 "rw_mbytes_per_sec": 0, 00:18:28.481 "r_mbytes_per_sec": 0, 00:18:28.481 "w_mbytes_per_sec": 0 00:18:28.481 }, 00:18:28.481 "claimed": false, 00:18:28.481 "zoned": false, 00:18:28.481 "supported_io_types": { 00:18:28.481 "read": true, 00:18:28.481 "write": true, 00:18:28.481 "unmap": true, 00:18:28.481 "flush": false, 00:18:28.481 "reset": true, 00:18:28.481 "nvme_admin": false, 00:18:28.481 "nvme_io": false, 00:18:28.481 "nvme_io_md": false, 00:18:28.481 "write_zeroes": true, 00:18:28.481 "zcopy": false, 00:18:28.481 "get_zone_info": false, 00:18:28.481 "zone_management": false, 00:18:28.481 "zone_append": false, 00:18:28.481 "compare": false, 00:18:28.481 "compare_and_write": false, 00:18:28.481 "abort": false, 00:18:28.481 "seek_hole": true, 00:18:28.481 "seek_data": true, 00:18:28.481 "copy": false, 00:18:28.481 "nvme_iov_md": false 00:18:28.481 }, 00:18:28.481 "driver_specific": { 00:18:28.481 "lvol": { 00:18:28.481 "lvol_store_uuid": "ed96b701-742e-454a-9396-e21e1bc0fdd3", 00:18:28.481 "base_bdev": "nvme0n1", 00:18:28.481 "thin_provision": true, 00:18:28.481 "num_allocated_clusters": 0, 00:18:28.481 "snapshot": false, 00:18:28.481 "clone": false, 00:18:28.481 "esnap_clone": false 00:18:28.481 } 00:18:28.481 } 00:18:28.481 } 00:18:28.481 ]' 00:18:28.481 08:39:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:28.481 08:39:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:28.481 08:39:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:28.745 08:39:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:28.745 08:39:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:28.745 08:39:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:28.745 08:39:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:28.745 08:39:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 05d9cae1-0c48-4421-b320-76bb4eac8165 -c nvc0n1p0 --l2p_dram_limit 20 00:18:28.745 [2024-11-19 08:39:50.595633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.745 [2024-11-19 08:39:50.595777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:28.745 [2024-11-19 08:39:50.595834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:28.745 [2024-11-19 08:39:50.595857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.745 [2024-11-19 08:39:50.595949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.745 [2024-11-19 08:39:50.596031] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:28.745 [2024-11-19 08:39:50.596077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:28.745 [2024-11-19 08:39:50.596115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.746 [2024-11-19 08:39:50.596158] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:28.746 [2024-11-19 08:39:50.596506] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:28.746 [2024-11-19 08:39:50.596533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.746 [2024-11-19 08:39:50.596548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:28.746 [2024-11-19 08:39:50.596560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:18:28.746 [2024-11-19 08:39:50.596579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.746 [2024-11-19 08:39:50.596631] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1333246f-3491-4270-a134-825841d6eb41 00:18:28.746 [2024-11-19 08:39:50.598069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.746 [2024-11-19 08:39:50.598097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:28.746 [2024-11-19 08:39:50.598107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:28.746 [2024-11-19 08:39:50.598126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.746 [2024-11-19 08:39:50.605543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.746 [2024-11-19 08:39:50.605633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:28.746 [2024-11-19 08:39:50.605647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.390 ms 00:18:28.746 [2024-11-19 08:39:50.605660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.746 [2024-11-19 08:39:50.605823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.746 [2024-11-19 08:39:50.605841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:28.746 [2024-11-19 08:39:50.605849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:28.746 [2024-11-19 08:39:50.605862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.746 [2024-11-19 08:39:50.605925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.746 [2024-11-19 08:39:50.605942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:28.746 [2024-11-19 08:39:50.605949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:28.746 [2024-11-19 08:39:50.605959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.746 [2024-11-19 08:39:50.605981] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:28.746 [2024-11-19 08:39:50.607656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.746 [2024-11-19 08:39:50.607694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:28.746 [2024-11-19 08:39:50.607706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:18:28.746 [2024-11-19 08:39:50.607735] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.746 [2024-11-19 08:39:50.607768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.746 [2024-11-19 08:39:50.607776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:28.746 [2024-11-19 08:39:50.607788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:28.746 [2024-11-19 08:39:50.607795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.746 [2024-11-19 08:39:50.607812] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:28.746 [2024-11-19 08:39:50.607943] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:28.746 [2024-11-19 08:39:50.607958] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:28.746 [2024-11-19 08:39:50.607969] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:28.746 [2024-11-19 08:39:50.607980] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:28.746 [2024-11-19 08:39:50.607989] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:28.746 [2024-11-19 08:39:50.608001] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:28.746 [2024-11-19 08:39:50.608026] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:28.746 [2024-11-19 08:39:50.608035] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:28.746 [2024-11-19 08:39:50.608044] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:28.746 [2024-11-19 08:39:50.608064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.746 [2024-11-19 08:39:50.608075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:28.746 [2024-11-19 08:39:50.608086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:18:28.746 [2024-11-19 08:39:50.608095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.746 [2024-11-19 08:39:50.608179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.746 [2024-11-19 08:39:50.608188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:28.746 [2024-11-19 08:39:50.608199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:28.746 [2024-11-19 08:39:50.608207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.746 [2024-11-19 08:39:50.608303] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:28.746 [2024-11-19 08:39:50.608319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:28.746 [2024-11-19 08:39:50.608331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:28.746 [2024-11-19 08:39:50.608339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.746 [2024-11-19 08:39:50.608349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:28.746 [2024-11-19 08:39:50.608356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:28.746 [2024-11-19 08:39:50.608365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:28.746 
[2024-11-19 08:39:50.608372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:28.746 [2024-11-19 08:39:50.608382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:28.746 [2024-11-19 08:39:50.608389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:28.746 [2024-11-19 08:39:50.608399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:28.746 [2024-11-19 08:39:50.608406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:28.746 [2024-11-19 08:39:50.608417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:28.746 [2024-11-19 08:39:50.608424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:28.746 [2024-11-19 08:39:50.608433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:28.746 [2024-11-19 08:39:50.608440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.746 [2024-11-19 08:39:50.608449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:28.746 [2024-11-19 08:39:50.608456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:28.746 [2024-11-19 08:39:50.608465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.746 [2024-11-19 08:39:50.608471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:28.746 [2024-11-19 08:39:50.608480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:28.746 [2024-11-19 08:39:50.608487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.746 [2024-11-19 08:39:50.608496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:28.746 [2024-11-19 08:39:50.608503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:28.746 [2024-11-19 08:39:50.608511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.746 [2024-11-19 08:39:50.608517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:28.746 [2024-11-19 08:39:50.608526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:28.746 [2024-11-19 08:39:50.608532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.746 [2024-11-19 08:39:50.608543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:28.746 [2024-11-19 08:39:50.608550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:28.747 [2024-11-19 08:39:50.608558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.747 [2024-11-19 08:39:50.608565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:28.747 [2024-11-19 08:39:50.608583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:28.747 [2024-11-19 08:39:50.608590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:28.747 [2024-11-19 08:39:50.608598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:28.747 [2024-11-19 08:39:50.608605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:28.747 [2024-11-19 08:39:50.608615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:28.747 [2024-11-19 08:39:50.608623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:28.747 [2024-11-19 08:39:50.608631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:28.747 [2024-11-19 08:39:50.608638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.747 [2024-11-19 08:39:50.608647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:28.747 [2024-11-19 08:39:50.608654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:28.747 [2024-11-19 08:39:50.608663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.747 [2024-11-19 08:39:50.608670] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:28.747 [2024-11-19 08:39:50.608682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:28.747 [2024-11-19 08:39:50.608689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:28.747 [2024-11-19 08:39:50.608699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.747 [2024-11-19 08:39:50.608707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:28.747 [2024-11-19 08:39:50.608801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:28.747 [2024-11-19 08:39:50.608829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:28.747 [2024-11-19 08:39:50.608854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:28.747 [2024-11-19 08:39:50.608891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:28.747 [2024-11-19 08:39:50.608944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:28.747 [2024-11-19 08:39:50.608984] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:28.747 [2024-11-19 08:39:50.609045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:28.747 [2024-11-19 08:39:50.609105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:28.747 [2024-11-19 08:39:50.609196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:28.747 [2024-11-19 08:39:50.609237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:28.747 [2024-11-19 08:39:50.609293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:28.747 [2024-11-19 08:39:50.609342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:28.747 [2024-11-19 08:39:50.609354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:28.747 [2024-11-19 08:39:50.609362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:28.747 [2024-11-19 08:39:50.609372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:28.747 [2024-11-19 08:39:50.609379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:28.747 [2024-11-19 08:39:50.609399] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:28.747 [2024-11-19 08:39:50.609406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:28.747 [2024-11-19 08:39:50.609415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:28.747 [2024-11-19 08:39:50.609422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:28.747 [2024-11-19 08:39:50.609432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:28.747 [2024-11-19 08:39:50.609439] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:28.747 [2024-11-19 08:39:50.609450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:28.747 [2024-11-19 08:39:50.609461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:28.747 [2024-11-19 08:39:50.609472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:28.747 [2024-11-19 08:39:50.609479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:28.747 [2024-11-19 08:39:50.609488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:28.747 [2024-11-19 08:39:50.609498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.747 [2024-11-19 08:39:50.609510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:28.747 [2024-11-19 08:39:50.609518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:18:28.747 [2024-11-19 08:39:50.609537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.747 [2024-11-19 08:39:50.609577] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
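For reference, the device stack whose startup is traced above was put together with the following RPC calls, copied from the shell trace earlier in this run (the lvol UUID, the 5171 MiB split size, and the 20 MiB L2P limit are simply the values this run computed, not fixed parameters):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
      # carve nvc0n1p0 out of the cache device to serve as the FTL write-buffer (NV) cache
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 05d9cae1-0c48-4421-b320-76bb4eac8165 -c nvc0n1p0 --l2p_dram_limit 20
      # build the ftl0 bdev on the thin-provisioned lvol, with nvc0n1p0 as NV cache

The layout dump above is consistent with those arguments: a 103424.00 MiB base device, a 5171.00 MiB NV cache, and 20971520 L2P entries of 4 bytes each, which works out to exactly the 80.00 MiB l2p region reported in the NV cache layout.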
00:18:28.747 [2024-11-19 08:39:50.609588] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:32.939 [2024-11-19 08:39:54.331164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.331239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:32.939 [2024-11-19 08:39:54.331254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3728.763 ms 00:18:32.939 [2024-11-19 08:39:54.331275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.342771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.342924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:32.939 [2024-11-19 08:39:54.342941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.412 ms 00:18:32.939 [2024-11-19 08:39:54.342955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.343122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.343141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:32.939 [2024-11-19 08:39:54.343149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:32.939 [2024-11-19 08:39:54.343161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.369040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.369165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:32.939 [2024-11-19 08:39:54.369208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.880 ms 00:18:32.939 [2024-11-19 08:39:54.369245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.369330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.369368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:32.939 [2024-11-19 08:39:54.369442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:32.939 [2024-11-19 08:39:54.369477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.370287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.370346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:32.939 [2024-11-19 08:39:54.370368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:18:32.939 [2024-11-19 08:39:54.370395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.370632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.370666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:32.939 [2024-11-19 08:39:54.370704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:18:32.939 [2024-11-19 08:39:54.370765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.380369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.380425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:32.939 [2024-11-19 
08:39:54.380442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.582 ms 00:18:32.939 [2024-11-19 08:39:54.380460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.391553] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:32.939 [2024-11-19 08:39:54.398027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.398063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:32.939 [2024-11-19 08:39:54.398078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.449 ms 00:18:32.939 [2024-11-19 08:39:54.398087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.475353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.475484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:32.939 [2024-11-19 08:39:54.475532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.366 ms 00:18:32.939 [2024-11-19 08:39:54.475541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.475761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.475775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:32.939 [2024-11-19 08:39:54.475785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:18:32.939 [2024-11-19 08:39:54.475792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.479582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.479618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:32.939 [2024-11-19 08:39:54.479630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.767 ms 00:18:32.939 [2024-11-19 08:39:54.479638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.482311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.482377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:32.939 [2024-11-19 08:39:54.482394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.643 ms 00:18:32.939 [2024-11-19 08:39:54.482417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.482681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.482702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:32.939 [2024-11-19 08:39:54.482731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:18:32.939 [2024-11-19 08:39:54.482740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.518212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.518258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:32.939 [2024-11-19 08:39:54.518273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.500 ms 00:18:32.939 [2024-11-19 08:39:54.518294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.522706] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.522754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:32.939 [2024-11-19 08:39:54.522767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.362 ms 00:18:32.939 [2024-11-19 08:39:54.522775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.526066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.526114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:32.939 [2024-11-19 08:39:54.526130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.262 ms 00:18:32.939 [2024-11-19 08:39:54.526139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.529677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.529808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:32.939 [2024-11-19 08:39:54.529845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.505 ms 00:18:32.939 [2024-11-19 08:39:54.529855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.529908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.529931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:32.939 [2024-11-19 08:39:54.529946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:32.939 [2024-11-19 08:39:54.529955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.530036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.939 [2024-11-19 08:39:54.530046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:32.939 [2024-11-19 08:39:54.530058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:32.939 [2024-11-19 08:39:54.530067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.939 [2024-11-19 08:39:54.531205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3942.602 ms, result 0 00:18:32.939 { 00:18:32.939 "name": "ftl0", 00:18:32.939 "uuid": "1333246f-3491-4270-a134-825841d6eb41" 00:18:32.939 } 00:18:32.939 08:39:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:32.939 08:39:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:32.939 08:39:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:32.939 08:39:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:33.198 [2024-11-19 08:39:54.877149] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:33.198 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:33.198 Zero copy mechanism will not be used. 00:18:33.198 Running I/O for 4 seconds... 
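As a quick check on the notice just above, the 69632-byte I/O size of this first pass is 17 blocks of 4096 bytes, or 68 KiB, which is just above the 65536-byte (64 KiB) zero-copy threshold bdevperf reports, so this workload runs without the zero-copy path.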
00:18:35.068 1626.00 IOPS, 107.98 MiB/s [2024-11-19T08:39:57.912Z] 1650.00 IOPS, 109.57 MiB/s [2024-11-19T08:39:59.293Z] 1680.33 IOPS, 111.58 MiB/s [2024-11-19T08:39:59.293Z] 1711.25 IOPS, 113.64 MiB/s 00:18:37.386 Latency(us) 00:18:37.386 [2024-11-19T08:39:59.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.386 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:37.386 ftl0 : 4.00 1710.79 113.61 0.00 0.00 616.67 225.37 2189.30 00:18:37.386 [2024-11-19T08:39:59.293Z] =================================================================================================================== 00:18:37.386 [2024-11-19T08:39:59.293Z] Total : 1710.79 113.61 0.00 0.00 616.67 225.37 2189.30 00:18:37.386 { 00:18:37.386 "results": [ 00:18:37.386 { 00:18:37.386 "job": "ftl0", 00:18:37.386 "core_mask": "0x1", 00:18:37.386 "workload": "randwrite", 00:18:37.386 "status": "finished", 00:18:37.386 "queue_depth": 1, 00:18:37.386 "io_size": 69632, 00:18:37.386 "runtime": 4.001655, 00:18:37.386 "iops": 1710.792159743906, 00:18:37.386 "mibps": 113.60729185799376, 00:18:37.386 "io_failed": 0, 00:18:37.386 "io_timeout": 0, 00:18:37.386 "avg_latency_us": 616.667094800521, 00:18:37.386 "min_latency_us": 225.3694323144105, 00:18:37.386 "max_latency_us": 2189.303056768559 00:18:37.386 } 00:18:37.386 ], 00:18:37.386 "core_count": 1 00:18:37.386 } 00:18:37.386 [2024-11-19 08:39:58.876891] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:37.386 08:39:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:37.386 [2024-11-19 08:39:58.997341] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:37.386 Running I/O for 4 seconds... 
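Each pass finishes with a JSON result block like the one above, which can be post-processed with the same jq tool the harness already uses for bdev properties. A minimal sketch, assuming the block has been captured to a file (results.json is only a placeholder name for this example):

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json

For the first pass this prints the ftl0 job with roughly 1710.79 IOPS, 113.61 MiB/s and a 616.67 us average latency, matching the summary table above it.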
00:18:39.407 10640.00 IOPS, 41.56 MiB/s [2024-11-19T08:40:02.258Z] 10363.00 IOPS, 40.48 MiB/s [2024-11-19T08:40:03.194Z] 10327.33 IOPS, 40.34 MiB/s [2024-11-19T08:40:03.194Z] 10319.75 IOPS, 40.31 MiB/s 00:18:41.287 Latency(us) 00:18:41.287 [2024-11-19T08:40:03.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.287 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:41.287 ftl0 : 4.02 10309.34 40.27 0.00 0.00 12389.86 284.39 35486.74 00:18:41.287 [2024-11-19T08:40:03.194Z] =================================================================================================================== 00:18:41.287 [2024-11-19T08:40:03.194Z] Total : 10309.34 40.27 0.00 0.00 12389.86 0.00 35486.74 00:18:41.287 [2024-11-19 08:40:03.011933] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:41.287 { 00:18:41.287 "results": [ 00:18:41.287 { 00:18:41.287 "job": "ftl0", 00:18:41.287 "core_mask": "0x1", 00:18:41.287 "workload": "randwrite", 00:18:41.287 "status": "finished", 00:18:41.287 "queue_depth": 128, 00:18:41.287 "io_size": 4096, 00:18:41.287 "runtime": 4.016067, 00:18:41.287 "iops": 10309.339958720808, 00:18:41.287 "mibps": 40.270859213753155, 00:18:41.287 "io_failed": 0, 00:18:41.287 "io_timeout": 0, 00:18:41.287 "avg_latency_us": 12389.858098884677, 00:18:41.287 "min_latency_us": 284.3947598253275, 00:18:41.287 "max_latency_us": 35486.742358078605 00:18:41.287 } 00:18:41.287 ], 00:18:41.287 "core_count": 1 00:18:41.287 } 00:18:41.287 08:40:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:41.287 [2024-11-19 08:40:03.128161] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:41.287 Running I/O for 4 seconds... 
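Taken together, the three measurement passes in this test drive the ftl0 bdev through the already-running bdevperf application with the following invocations, copied from the trace (paths abbreviated, each pass runs for 4 seconds):

  examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632    # shallow queue, 68 KiB random writes
  examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096   # deep queue, 4 KiB random writes
  examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096      # deep queue, 4 KiB I/O with read-back verification

The verify pass additionally logs the LBA range it covers (start 0x0, length 0x1400000) alongside the usual IOPS and latency figures.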
00:18:43.231 8225.00 IOPS, 32.13 MiB/s [2024-11-19T08:40:06.514Z] 8319.00 IOPS, 32.50 MiB/s [2024-11-19T08:40:07.451Z] 8343.67 IOPS, 32.59 MiB/s [2024-11-19T08:40:07.451Z] 8214.75 IOPS, 32.09 MiB/s 00:18:45.544 Latency(us) 00:18:45.544 [2024-11-19T08:40:07.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.544 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:45.544 Verification LBA range: start 0x0 length 0x1400000 00:18:45.544 ftl0 : 4.01 8225.72 32.13 0.00 0.00 15512.57 282.61 44644.61 00:18:45.544 [2024-11-19T08:40:07.451Z] =================================================================================================================== 00:18:45.544 [2024-11-19T08:40:07.451Z] Total : 8225.72 32.13 0.00 0.00 15512.57 0.00 44644.61 00:18:45.544 [2024-11-19 08:40:07.137581] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:45.544 { 00:18:45.544 "results": [ 00:18:45.544 { 00:18:45.544 "job": "ftl0", 00:18:45.544 "core_mask": "0x1", 00:18:45.544 "workload": "verify", 00:18:45.544 "status": "finished", 00:18:45.544 "verify_range": { 00:18:45.544 "start": 0, 00:18:45.544 "length": 20971520 00:18:45.544 }, 00:18:45.544 "queue_depth": 128, 00:18:45.544 "io_size": 4096, 00:18:45.544 "runtime": 4.010227, 00:18:45.544 "iops": 8225.71889321976, 00:18:45.544 "mibps": 32.13171442663969, 00:18:45.544 "io_failed": 0, 00:18:45.544 "io_timeout": 0, 00:18:45.544 "avg_latency_us": 15512.571766964438, 00:18:45.544 "min_latency_us": 282.6061135371179, 00:18:45.544 "max_latency_us": 44644.61135371179 00:18:45.544 } 00:18:45.544 ], 00:18:45.544 "core_count": 1 00:18:45.544 } 00:18:45.544 08:40:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:45.544 [2024-11-19 08:40:07.344880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.544 [2024-11-19 08:40:07.344945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:45.545 [2024-11-19 08:40:07.344960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:45.545 [2024-11-19 08:40:07.344968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.545 [2024-11-19 08:40:07.344992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:45.545 [2024-11-19 08:40:07.345631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.545 [2024-11-19 08:40:07.345646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:45.545 [2024-11-19 08:40:07.345657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:18:45.545 [2024-11-19 08:40:07.345673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.545 [2024-11-19 08:40:07.347674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.545 [2024-11-19 08:40:07.347735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:45.545 [2024-11-19 08:40:07.347748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.987 ms 00:18:45.545 [2024-11-19 08:40:07.347762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.804 [2024-11-19 08:40:07.567665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.804 [2024-11-19 08:40:07.567753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:18:45.804 [2024-11-19 08:40:07.567771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 220.293 ms 00:18:45.804 [2024-11-19 08:40:07.567786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.804 [2024-11-19 08:40:07.572843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.804 [2024-11-19 08:40:07.572881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:45.804 [2024-11-19 08:40:07.572890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.028 ms 00:18:45.804 [2024-11-19 08:40:07.572900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.804 true 00:18:45.804 [2024-11-19 08:40:07.574712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.804 [2024-11-19 08:40:07.574767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:45.804 [2024-11-19 08:40:07.574777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.771 ms 00:18:45.804 [2024-11-19 08:40:07.574786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.804 [2024-11-19 08:40:07.579398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.804 [2024-11-19 08:40:07.579453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:45.804 [2024-11-19 08:40:07.579481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.595 ms 00:18:45.804 [2024-11-19 08:40:07.579499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.804 [2024-11-19 08:40:07.579593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.804 [2024-11-19 08:40:07.579605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:45.804 [2024-11-19 08:40:07.579613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:45.804 [2024-11-19 08:40:07.579622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.805 [2024-11-19 08:40:07.581858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.805 [2024-11-19 08:40:07.581950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:45.805 [2024-11-19 08:40:07.581963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.227 ms 00:18:45.805 [2024-11-19 08:40:07.581971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.805 [2024-11-19 08:40:07.583603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.805 [2024-11-19 08:40:07.583641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:45.805 [2024-11-19 08:40:07.583650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.605 ms 00:18:45.805 [2024-11-19 08:40:07.583658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.805 [2024-11-19 08:40:07.584835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.805 [2024-11-19 08:40:07.584868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:45.805 [2024-11-19 08:40:07.584876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:18:45.805 [2024-11-19 08:40:07.584886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.805 [2024-11-19 08:40:07.586010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.805 [2024-11-19 
08:40:07.586046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:45.805 [2024-11-19 08:40:07.586055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:18:45.805 [2024-11-19 08:40:07.586062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.805 [2024-11-19 08:40:07.586085] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:45.805 [2024-11-19 08:40:07.586118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586732] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:45.805 [2024-11-19 08:40:07.586843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586952] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:45.806 [2024-11-19 08:40:07.586993] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:45.806 [2024-11-19 08:40:07.587024] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1333246f-3491-4270-a134-825841d6eb41 00:18:45.806 [2024-11-19 08:40:07.587035] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:45.806 [2024-11-19 08:40:07.587050] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:45.806 [2024-11-19 08:40:07.587068] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:45.806 [2024-11-19 08:40:07.587076] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:45.806 [2024-11-19 08:40:07.587086] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:45.806 [2024-11-19 08:40:07.587094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:45.806 [2024-11-19 08:40:07.587106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:45.806 [2024-11-19 08:40:07.587113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:45.806 [2024-11-19 08:40:07.587121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:45.806 [2024-11-19 08:40:07.587128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.806 [2024-11-19 08:40:07.587137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:45.806 [2024-11-19 08:40:07.587148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.046 ms 00:18:45.806 [2024-11-19 08:40:07.587157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.588858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.806 [2024-11-19 08:40:07.588881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:45.806 [2024-11-19 08:40:07.588890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:18:45.806 [2024-11-19 08:40:07.588899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.589009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.806 [2024-11-19 08:40:07.589020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:45.806 [2024-11-19 08:40:07.589029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:45.806 [2024-11-19 08:40:07.589042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.595032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.806 [2024-11-19 08:40:07.595063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:45.806 [2024-11-19 08:40:07.595072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.806 [2024-11-19 08:40:07.595081] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.595127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.806 [2024-11-19 08:40:07.595136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:45.806 [2024-11-19 08:40:07.595144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.806 [2024-11-19 08:40:07.595155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.595204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.806 [2024-11-19 08:40:07.595217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:45.806 [2024-11-19 08:40:07.595224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.806 [2024-11-19 08:40:07.595242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.595257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.806 [2024-11-19 08:40:07.595266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:45.806 [2024-11-19 08:40:07.595273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.806 [2024-11-19 08:40:07.595283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.608837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.806 [2024-11-19 08:40:07.608958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:45.806 [2024-11-19 08:40:07.608989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.806 [2024-11-19 08:40:07.609011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.617455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.806 [2024-11-19 08:40:07.617554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:45.806 [2024-11-19 08:40:07.617584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.806 [2024-11-19 08:40:07.617609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.617701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.806 [2024-11-19 08:40:07.617784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:45.806 [2024-11-19 08:40:07.617828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.806 [2024-11-19 08:40:07.617854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.617962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.806 [2024-11-19 08:40:07.618001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:45.806 [2024-11-19 08:40:07.618030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.806 [2024-11-19 08:40:07.618056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.618173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.806 [2024-11-19 08:40:07.618216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:45.806 [2024-11-19 08:40:07.618244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:18:45.806 [2024-11-19 08:40:07.618274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.618341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.806 [2024-11-19 08:40:07.618378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:45.806 [2024-11-19 08:40:07.618414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.806 [2024-11-19 08:40:07.618437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.618516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.806 [2024-11-19 08:40:07.618547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:45.806 [2024-11-19 08:40:07.618586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.806 [2024-11-19 08:40:07.618615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.618701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.806 [2024-11-19 08:40:07.618757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:45.806 [2024-11-19 08:40:07.618788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.806 [2024-11-19 08:40:07.618813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.806 [2024-11-19 08:40:07.618966] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 274.572 ms, result 0 00:18:45.806 08:40:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 85893 00:18:45.806 08:40:07 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 85893 ']' 00:18:45.806 08:40:07 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 85893 00:18:45.806 08:40:07 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:18:45.806 08:40:07 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.806 08:40:07 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85893 00:18:45.806 08:40:07 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.806 killing process with pid 85893 00:18:45.806 Received shutdown signal, test time was about 4.000000 seconds 00:18:45.806 00:18:45.806 Latency(us) 00:18:45.806 [2024-11-19T08:40:07.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.806 [2024-11-19T08:40:07.713Z] =================================================================================================================== 00:18:45.806 [2024-11-19T08:40:07.713Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.806 08:40:07 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.806 08:40:07 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85893' 00:18:45.806 08:40:07 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 85893 00:18:45.806 08:40:07 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 85893 00:18:50.006 Remove shared memory files 00:18:50.006 08:40:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:50.006 08:40:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:18:50.006 08:40:11 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:50.006 08:40:11 ftl.ftl_bdevperf -- 
ftl/common.sh@205 -- # rm -f rm -f 00:18:50.006 08:40:11 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:50.006 08:40:11 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:50.006 08:40:11 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:50.006 08:40:11 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:50.006 ************************************ 00:18:50.006 END TEST ftl_bdevperf 00:18:50.006 ************************************ 00:18:50.006 00:18:50.006 real 0m24.958s 00:18:50.006 user 0m27.686s 00:18:50.006 sys 0m1.080s 00:18:50.006 08:40:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.006 08:40:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:50.006 08:40:11 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:50.006 08:40:11 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:50.006 08:40:11 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.006 08:40:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:50.006 ************************************ 00:18:50.006 START TEST ftl_trim 00:18:50.006 ************************************ 00:18:50.006 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:50.006 * Looking for test storage... 00:18:50.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:50.006 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:50.006 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:50.006 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:18:50.006 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.006 08:40:11 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:50.006 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.006 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:50.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.006 --rc genhtml_branch_coverage=1 00:18:50.006 --rc genhtml_function_coverage=1 00:18:50.006 --rc genhtml_legend=1 00:18:50.006 --rc geninfo_all_blocks=1 00:18:50.006 --rc geninfo_unexecuted_blocks=1 00:18:50.006 00:18:50.006 ' 00:18:50.006 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:50.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.006 --rc genhtml_branch_coverage=1 00:18:50.006 --rc genhtml_function_coverage=1 00:18:50.006 --rc genhtml_legend=1 00:18:50.006 --rc geninfo_all_blocks=1 00:18:50.006 --rc geninfo_unexecuted_blocks=1 00:18:50.006 00:18:50.006 ' 00:18:50.006 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:50.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.006 --rc genhtml_branch_coverage=1 00:18:50.006 --rc genhtml_function_coverage=1 00:18:50.006 --rc genhtml_legend=1 00:18:50.006 --rc geninfo_all_blocks=1 00:18:50.006 --rc geninfo_unexecuted_blocks=1 00:18:50.006 00:18:50.006 ' 00:18:50.006 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:50.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.006 --rc genhtml_branch_coverage=1 00:18:50.006 --rc genhtml_function_coverage=1 00:18:50.006 --rc genhtml_legend=1 00:18:50.006 --rc geninfo_all_blocks=1 00:18:50.006 --rc geninfo_unexecuted_blocks=1 00:18:50.006 00:18:50.006 ' 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:50.006 08:40:11 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=86250 00:18:50.006 08:40:11 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:50.007 08:40:11 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 86250 00:18:50.007 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 86250 ']' 00:18:50.007 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.007 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.007 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.007 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.007 08:40:11 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:50.007 [2024-11-19 08:40:11.800325] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:18:50.007 [2024-11-19 08:40:11.800465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86250 ] 00:18:50.266 [2024-11-19 08:40:11.939786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:50.266 [2024-11-19 08:40:11.966314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.266 [2024-11-19 08:40:11.966346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.266 [2024-11-19 08:40:11.966384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.837 08:40:12 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.837 08:40:12 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:50.837 08:40:12 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:50.837 08:40:12 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:50.837 08:40:12 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:50.837 08:40:12 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:50.837 08:40:12 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:50.837 08:40:12 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:51.098 08:40:12 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:51.098 08:40:12 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:51.098 08:40:12 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:51.098 08:40:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:51.098 08:40:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:51.098 08:40:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:51.098 08:40:12 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:51.098 08:40:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:51.358 08:40:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:51.358 { 00:18:51.358 "name": "nvme0n1", 00:18:51.358 "aliases": [ 
00:18:51.358 "90b98ab3-44bc-4301-b7a9-685b6823657f" 00:18:51.358 ], 00:18:51.358 "product_name": "NVMe disk", 00:18:51.358 "block_size": 4096, 00:18:51.358 "num_blocks": 1310720, 00:18:51.358 "uuid": "90b98ab3-44bc-4301-b7a9-685b6823657f", 00:18:51.358 "numa_id": -1, 00:18:51.358 "assigned_rate_limits": { 00:18:51.358 "rw_ios_per_sec": 0, 00:18:51.358 "rw_mbytes_per_sec": 0, 00:18:51.358 "r_mbytes_per_sec": 0, 00:18:51.358 "w_mbytes_per_sec": 0 00:18:51.358 }, 00:18:51.358 "claimed": true, 00:18:51.358 "claim_type": "read_many_write_one", 00:18:51.358 "zoned": false, 00:18:51.358 "supported_io_types": { 00:18:51.358 "read": true, 00:18:51.358 "write": true, 00:18:51.358 "unmap": true, 00:18:51.358 "flush": true, 00:18:51.358 "reset": true, 00:18:51.358 "nvme_admin": true, 00:18:51.358 "nvme_io": true, 00:18:51.358 "nvme_io_md": false, 00:18:51.358 "write_zeroes": true, 00:18:51.358 "zcopy": false, 00:18:51.358 "get_zone_info": false, 00:18:51.358 "zone_management": false, 00:18:51.358 "zone_append": false, 00:18:51.358 "compare": true, 00:18:51.358 "compare_and_write": false, 00:18:51.358 "abort": true, 00:18:51.358 "seek_hole": false, 00:18:51.358 "seek_data": false, 00:18:51.358 "copy": true, 00:18:51.358 "nvme_iov_md": false 00:18:51.358 }, 00:18:51.358 "driver_specific": { 00:18:51.358 "nvme": [ 00:18:51.358 { 00:18:51.358 "pci_address": "0000:00:11.0", 00:18:51.358 "trid": { 00:18:51.358 "trtype": "PCIe", 00:18:51.358 "traddr": "0000:00:11.0" 00:18:51.358 }, 00:18:51.358 "ctrlr_data": { 00:18:51.358 "cntlid": 0, 00:18:51.358 "vendor_id": "0x1b36", 00:18:51.358 "model_number": "QEMU NVMe Ctrl", 00:18:51.358 "serial_number": "12341", 00:18:51.358 "firmware_revision": "8.0.0", 00:18:51.358 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:51.358 "oacs": { 00:18:51.358 "security": 0, 00:18:51.358 "format": 1, 00:18:51.358 "firmware": 0, 00:18:51.358 "ns_manage": 1 00:18:51.358 }, 00:18:51.358 "multi_ctrlr": false, 00:18:51.358 "ana_reporting": false 00:18:51.358 }, 00:18:51.358 "vs": { 00:18:51.358 "nvme_version": "1.4" 00:18:51.358 }, 00:18:51.358 "ns_data": { 00:18:51.358 "id": 1, 00:18:51.358 "can_share": false 00:18:51.358 } 00:18:51.358 } 00:18:51.358 ], 00:18:51.358 "mp_policy": "active_passive" 00:18:51.358 } 00:18:51.358 } 00:18:51.358 ]' 00:18:51.358 08:40:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:51.358 08:40:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:51.358 08:40:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:51.358 08:40:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:51.358 08:40:13 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:51.358 08:40:13 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:18:51.358 08:40:13 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:51.358 08:40:13 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:51.358 08:40:13 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:51.358 08:40:13 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:51.358 08:40:13 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:51.617 08:40:13 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=ed96b701-742e-454a-9396-e21e1bc0fdd3 00:18:51.617 08:40:13 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:51.617 08:40:13 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u ed96b701-742e-454a-9396-e21e1bc0fdd3 00:18:51.877 08:40:13 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:52.137 08:40:13 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=3f3e11eb-d663-4ea0-9d64-a39c34dd8c67 00:18:52.137 08:40:13 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3f3e11eb-d663-4ea0-9d64-a39c34dd8c67 00:18:52.137 08:40:14 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=b0deede6-f487-4388-994a-df28283aeaf0 00:18:52.137 08:40:14 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b0deede6-f487-4388-994a-df28283aeaf0 00:18:52.137 08:40:14 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:52.137 08:40:14 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:52.137 08:40:14 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=b0deede6-f487-4388-994a-df28283aeaf0 00:18:52.137 08:40:14 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:52.137 08:40:14 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size b0deede6-f487-4388-994a-df28283aeaf0 00:18:52.137 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b0deede6-f487-4388-994a-df28283aeaf0 00:18:52.137 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:52.137 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:52.137 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:52.137 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0deede6-f487-4388-994a-df28283aeaf0 00:18:52.397 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:52.397 { 00:18:52.397 "name": "b0deede6-f487-4388-994a-df28283aeaf0", 00:18:52.397 "aliases": [ 00:18:52.397 "lvs/nvme0n1p0" 00:18:52.397 ], 00:18:52.397 "product_name": "Logical Volume", 00:18:52.397 "block_size": 4096, 00:18:52.397 "num_blocks": 26476544, 00:18:52.397 "uuid": "b0deede6-f487-4388-994a-df28283aeaf0", 00:18:52.397 "assigned_rate_limits": { 00:18:52.397 "rw_ios_per_sec": 0, 00:18:52.397 "rw_mbytes_per_sec": 0, 00:18:52.397 "r_mbytes_per_sec": 0, 00:18:52.397 "w_mbytes_per_sec": 0 00:18:52.397 }, 00:18:52.397 "claimed": false, 00:18:52.397 "zoned": false, 00:18:52.397 "supported_io_types": { 00:18:52.397 "read": true, 00:18:52.397 "write": true, 00:18:52.397 "unmap": true, 00:18:52.397 "flush": false, 00:18:52.397 "reset": true, 00:18:52.397 "nvme_admin": false, 00:18:52.397 "nvme_io": false, 00:18:52.397 "nvme_io_md": false, 00:18:52.397 "write_zeroes": true, 00:18:52.397 "zcopy": false, 00:18:52.397 "get_zone_info": false, 00:18:52.397 "zone_management": false, 00:18:52.397 "zone_append": false, 00:18:52.397 "compare": false, 00:18:52.397 "compare_and_write": false, 00:18:52.397 "abort": false, 00:18:52.397 "seek_hole": true, 00:18:52.397 "seek_data": true, 00:18:52.397 "copy": false, 00:18:52.397 "nvme_iov_md": false 00:18:52.397 }, 00:18:52.397 "driver_specific": { 00:18:52.397 "lvol": { 00:18:52.397 "lvol_store_uuid": "3f3e11eb-d663-4ea0-9d64-a39c34dd8c67", 00:18:52.397 "base_bdev": "nvme0n1", 00:18:52.397 "thin_provision": true, 00:18:52.397 "num_allocated_clusters": 0, 00:18:52.397 "snapshot": false, 00:18:52.397 "clone": false, 00:18:52.397 "esnap_clone": false 00:18:52.397 } 00:18:52.397 } 00:18:52.397 } 00:18:52.397 ]' 00:18:52.397 08:40:14 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:52.397 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:52.397 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:52.657 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:52.657 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:52.657 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:52.657 08:40:14 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:52.657 08:40:14 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:52.657 08:40:14 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:52.917 08:40:14 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:52.917 08:40:14 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:52.917 08:40:14 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size b0deede6-f487-4388-994a-df28283aeaf0 00:18:52.917 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b0deede6-f487-4388-994a-df28283aeaf0 00:18:52.917 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:52.917 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:52.917 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:52.917 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0deede6-f487-4388-994a-df28283aeaf0 00:18:52.917 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:52.917 { 00:18:52.917 "name": "b0deede6-f487-4388-994a-df28283aeaf0", 00:18:52.917 "aliases": [ 00:18:52.917 "lvs/nvme0n1p0" 00:18:52.917 ], 00:18:52.917 "product_name": "Logical Volume", 00:18:52.917 "block_size": 4096, 00:18:52.917 "num_blocks": 26476544, 00:18:52.917 "uuid": "b0deede6-f487-4388-994a-df28283aeaf0", 00:18:52.917 "assigned_rate_limits": { 00:18:52.917 "rw_ios_per_sec": 0, 00:18:52.917 "rw_mbytes_per_sec": 0, 00:18:52.917 "r_mbytes_per_sec": 0, 00:18:52.917 "w_mbytes_per_sec": 0 00:18:52.917 }, 00:18:52.917 "claimed": false, 00:18:52.917 "zoned": false, 00:18:52.917 "supported_io_types": { 00:18:52.917 "read": true, 00:18:52.917 "write": true, 00:18:52.917 "unmap": true, 00:18:52.917 "flush": false, 00:18:52.917 "reset": true, 00:18:52.917 "nvme_admin": false, 00:18:52.917 "nvme_io": false, 00:18:52.917 "nvme_io_md": false, 00:18:52.917 "write_zeroes": true, 00:18:52.917 "zcopy": false, 00:18:52.917 "get_zone_info": false, 00:18:52.917 "zone_management": false, 00:18:52.917 "zone_append": false, 00:18:52.917 "compare": false, 00:18:52.917 "compare_and_write": false, 00:18:52.917 "abort": false, 00:18:52.917 "seek_hole": true, 00:18:52.917 "seek_data": true, 00:18:52.917 "copy": false, 00:18:52.917 "nvme_iov_md": false 00:18:52.917 }, 00:18:52.917 "driver_specific": { 00:18:52.917 "lvol": { 00:18:52.917 "lvol_store_uuid": "3f3e11eb-d663-4ea0-9d64-a39c34dd8c67", 00:18:52.917 "base_bdev": "nvme0n1", 00:18:52.917 "thin_provision": true, 00:18:52.917 "num_allocated_clusters": 0, 00:18:52.917 "snapshot": false, 00:18:52.917 "clone": false, 00:18:52.917 "esnap_clone": false 00:18:52.917 } 00:18:52.917 } 00:18:52.917 } 00:18:52.917 ]' 00:18:52.917 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:53.177 08:40:14 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:18:53.177 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:53.177 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:53.177 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:53.177 08:40:14 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:53.177 08:40:14 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:53.177 08:40:14 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:53.177 08:40:15 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:53.177 08:40:15 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:53.177 08:40:15 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size b0deede6-f487-4388-994a-df28283aeaf0 00:18:53.177 08:40:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b0deede6-f487-4388-994a-df28283aeaf0 00:18:53.177 08:40:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:53.177 08:40:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:53.177 08:40:15 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:53.177 08:40:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0deede6-f487-4388-994a-df28283aeaf0 00:18:53.437 08:40:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:53.437 { 00:18:53.437 "name": "b0deede6-f487-4388-994a-df28283aeaf0", 00:18:53.437 "aliases": [ 00:18:53.437 "lvs/nvme0n1p0" 00:18:53.437 ], 00:18:53.437 "product_name": "Logical Volume", 00:18:53.437 "block_size": 4096, 00:18:53.437 "num_blocks": 26476544, 00:18:53.437 "uuid": "b0deede6-f487-4388-994a-df28283aeaf0", 00:18:53.437 "assigned_rate_limits": { 00:18:53.437 "rw_ios_per_sec": 0, 00:18:53.437 "rw_mbytes_per_sec": 0, 00:18:53.437 "r_mbytes_per_sec": 0, 00:18:53.437 "w_mbytes_per_sec": 0 00:18:53.437 }, 00:18:53.437 "claimed": false, 00:18:53.437 "zoned": false, 00:18:53.437 "supported_io_types": { 00:18:53.437 "read": true, 00:18:53.437 "write": true, 00:18:53.437 "unmap": true, 00:18:53.437 "flush": false, 00:18:53.437 "reset": true, 00:18:53.437 "nvme_admin": false, 00:18:53.437 "nvme_io": false, 00:18:53.437 "nvme_io_md": false, 00:18:53.437 "write_zeroes": true, 00:18:53.437 "zcopy": false, 00:18:53.437 "get_zone_info": false, 00:18:53.437 "zone_management": false, 00:18:53.437 "zone_append": false, 00:18:53.437 "compare": false, 00:18:53.437 "compare_and_write": false, 00:18:53.437 "abort": false, 00:18:53.437 "seek_hole": true, 00:18:53.437 "seek_data": true, 00:18:53.437 "copy": false, 00:18:53.437 "nvme_iov_md": false 00:18:53.437 }, 00:18:53.437 "driver_specific": { 00:18:53.437 "lvol": { 00:18:53.437 "lvol_store_uuid": "3f3e11eb-d663-4ea0-9d64-a39c34dd8c67", 00:18:53.437 "base_bdev": "nvme0n1", 00:18:53.437 "thin_provision": true, 00:18:53.437 "num_allocated_clusters": 0, 00:18:53.437 "snapshot": false, 00:18:53.437 "clone": false, 00:18:53.437 "esnap_clone": false 00:18:53.437 } 00:18:53.437 } 00:18:53.437 } 00:18:53.437 ]' 00:18:53.437 08:40:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:53.437 08:40:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:53.437 08:40:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:53.698 08:40:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:18:53.698 08:40:15 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:53.698 08:40:15 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:53.698 08:40:15 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:53.698 08:40:15 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b0deede6-f487-4388-994a-df28283aeaf0 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:53.698 [2024-11-19 08:40:15.514073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.698 [2024-11-19 08:40:15.514128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:53.698 [2024-11-19 08:40:15.514141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:53.698 [2024-11-19 08:40:15.514151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.698 [2024-11-19 08:40:15.516419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.698 [2024-11-19 08:40:15.516470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:53.698 [2024-11-19 08:40:15.516481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:18:53.698 [2024-11-19 08:40:15.516491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.698 [2024-11-19 08:40:15.516585] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:53.698 [2024-11-19 08:40:15.516860] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:53.698 [2024-11-19 08:40:15.516884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.698 [2024-11-19 08:40:15.516893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:53.698 [2024-11-19 08:40:15.516902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:18:53.698 [2024-11-19 08:40:15.516913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.698 [2024-11-19 08:40:15.517024] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d4ca7f05-e3be-4ef5-9cfc-3c0158818b5e 00:18:53.698 [2024-11-19 08:40:15.518402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.698 [2024-11-19 08:40:15.518425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:53.698 [2024-11-19 08:40:15.518436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:53.698 [2024-11-19 08:40:15.518443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.698 [2024-11-19 08:40:15.525760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.698 [2024-11-19 08:40:15.525792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:53.698 [2024-11-19 08:40:15.525806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.251 ms 00:18:53.698 [2024-11-19 08:40:15.525813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.698 [2024-11-19 08:40:15.525945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.698 [2024-11-19 08:40:15.525960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:53.698 [2024-11-19 08:40:15.525972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.062 ms 00:18:53.698 [2024-11-19 08:40:15.525980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.698 [2024-11-19 08:40:15.526024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.698 [2024-11-19 08:40:15.526032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:53.698 [2024-11-19 08:40:15.526041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:53.698 [2024-11-19 08:40:15.526049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.698 [2024-11-19 08:40:15.526086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:53.698 [2024-11-19 08:40:15.527759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.698 [2024-11-19 08:40:15.527781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:53.698 [2024-11-19 08:40:15.527789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:18:53.698 [2024-11-19 08:40:15.527816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.698 [2024-11-19 08:40:15.527857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.698 [2024-11-19 08:40:15.527871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:53.698 [2024-11-19 08:40:15.527879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:53.698 [2024-11-19 08:40:15.527890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.698 [2024-11-19 08:40:15.527918] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:53.698 [2024-11-19 08:40:15.528078] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:53.698 [2024-11-19 08:40:15.528091] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:53.698 [2024-11-19 08:40:15.528102] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:53.698 [2024-11-19 08:40:15.528113] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:53.698 [2024-11-19 08:40:15.528123] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:53.698 [2024-11-19 08:40:15.528130] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:53.698 [2024-11-19 08:40:15.528139] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:53.698 [2024-11-19 08:40:15.528146] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:53.698 [2024-11-19 08:40:15.528155] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:53.698 [2024-11-19 08:40:15.528162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.698 [2024-11-19 08:40:15.528174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:53.698 [2024-11-19 08:40:15.528181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:18:53.698 [2024-11-19 08:40:15.528190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.698 [2024-11-19 08:40:15.528270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.698 
[2024-11-19 08:40:15.528294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:53.698 [2024-11-19 08:40:15.528301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:53.698 [2024-11-19 08:40:15.528310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.698 [2024-11-19 08:40:15.528427] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:53.698 [2024-11-19 08:40:15.528439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:53.698 [2024-11-19 08:40:15.528449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.698 [2024-11-19 08:40:15.528460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.698 [2024-11-19 08:40:15.528468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:53.698 [2024-11-19 08:40:15.528476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:53.698 [2024-11-19 08:40:15.528483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:53.698 [2024-11-19 08:40:15.528493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:53.698 [2024-11-19 08:40:15.528500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:53.698 [2024-11-19 08:40:15.528508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.698 [2024-11-19 08:40:15.528515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:53.699 [2024-11-19 08:40:15.528523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:53.699 [2024-11-19 08:40:15.528529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.699 [2024-11-19 08:40:15.528540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:53.699 [2024-11-19 08:40:15.528547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:53.699 [2024-11-19 08:40:15.528555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.699 [2024-11-19 08:40:15.528561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:53.699 [2024-11-19 08:40:15.528569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:53.699 [2024-11-19 08:40:15.528575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.699 [2024-11-19 08:40:15.528591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:53.699 [2024-11-19 08:40:15.528606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:53.699 [2024-11-19 08:40:15.528613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.699 [2024-11-19 08:40:15.528620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:53.699 [2024-11-19 08:40:15.528642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:53.699 [2024-11-19 08:40:15.528649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.699 [2024-11-19 08:40:15.528657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:53.699 [2024-11-19 08:40:15.528663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:53.699 [2024-11-19 08:40:15.528671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.699 [2024-11-19 08:40:15.528678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:18:53.699 [2024-11-19 08:40:15.528689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:53.699 [2024-11-19 08:40:15.528696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.699 [2024-11-19 08:40:15.528703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:53.699 [2024-11-19 08:40:15.528710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:53.699 [2024-11-19 08:40:15.528728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.699 [2024-11-19 08:40:15.528735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:53.699 [2024-11-19 08:40:15.528743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:53.699 [2024-11-19 08:40:15.528749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.699 [2024-11-19 08:40:15.528758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:53.699 [2024-11-19 08:40:15.528764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:53.699 [2024-11-19 08:40:15.528773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.699 [2024-11-19 08:40:15.528780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:53.699 [2024-11-19 08:40:15.528788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:53.699 [2024-11-19 08:40:15.528794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.699 [2024-11-19 08:40:15.528801] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:53.699 [2024-11-19 08:40:15.528809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:53.699 [2024-11-19 08:40:15.528831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.699 [2024-11-19 08:40:15.528838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.699 [2024-11-19 08:40:15.528847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:53.699 [2024-11-19 08:40:15.528854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:53.699 [2024-11-19 08:40:15.528862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:53.699 [2024-11-19 08:40:15.528869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:53.699 [2024-11-19 08:40:15.528877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:53.699 [2024-11-19 08:40:15.528884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:53.699 [2024-11-19 08:40:15.528897] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:53.699 [2024-11-19 08:40:15.528907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.699 [2024-11-19 08:40:15.528916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:53.699 [2024-11-19 08:40:15.528923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:53.699 [2024-11-19 08:40:15.528932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:18:53.699 [2024-11-19 08:40:15.528939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:53.699 [2024-11-19 08:40:15.528948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:53.699 [2024-11-19 08:40:15.528954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:53.699 [2024-11-19 08:40:15.528965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:53.699 [2024-11-19 08:40:15.528972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:53.699 [2024-11-19 08:40:15.528981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:53.699 [2024-11-19 08:40:15.528988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:53.699 [2024-11-19 08:40:15.528996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:53.699 [2024-11-19 08:40:15.529003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:53.699 [2024-11-19 08:40:15.529011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:53.699 [2024-11-19 08:40:15.529018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:53.699 [2024-11-19 08:40:15.529026] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:53.699 [2024-11-19 08:40:15.529033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.699 [2024-11-19 08:40:15.529045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:53.699 [2024-11-19 08:40:15.529052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:53.699 [2024-11-19 08:40:15.529061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:53.699 [2024-11-19 08:40:15.529068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:53.699 [2024-11-19 08:40:15.529078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.699 [2024-11-19 08:40:15.529098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:53.699 [2024-11-19 08:40:15.529110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:18:53.699 [2024-11-19 08:40:15.529117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.699 [2024-11-19 08:40:15.529199] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:18:53.699 [2024-11-19 08:40:15.529212] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:57.929 [2024-11-19 08:40:19.081507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.929 [2024-11-19 08:40:19.081568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:57.929 [2024-11-19 08:40:19.081585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3559.149 ms 00:18:57.929 [2024-11-19 08:40:19.081595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.929 [2024-11-19 08:40:19.092487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.929 [2024-11-19 08:40:19.092611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:57.929 [2024-11-19 08:40:19.092648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.806 ms 00:18:57.929 [2024-11-19 08:40:19.092657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.929 [2024-11-19 08:40:19.092821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.929 [2024-11-19 08:40:19.092834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:57.929 [2024-11-19 08:40:19.092844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:57.929 [2024-11-19 08:40:19.092855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.929 [2024-11-19 08:40:19.111514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.929 [2024-11-19 08:40:19.111556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:57.929 [2024-11-19 08:40:19.111570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.659 ms 00:18:57.929 [2024-11-19 08:40:19.111594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.929 [2024-11-19 08:40:19.111693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.929 [2024-11-19 08:40:19.111706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:57.929 [2024-11-19 08:40:19.111736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:57.929 [2024-11-19 08:40:19.111744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.112196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.112215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:57.930 [2024-11-19 08:40:19.112225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:18:57.930 [2024-11-19 08:40:19.112232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.112354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.112365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:57.930 [2024-11-19 08:40:19.112375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:57.930 [2024-11-19 08:40:19.112385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.119889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.120004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:18:57.930 [2024-11-19 08:40:19.120026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.469 ms 00:18:57.930 [2024-11-19 08:40:19.120036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.128085] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:57.930 [2024-11-19 08:40:19.144111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.144173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:57.930 [2024-11-19 08:40:19.144187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.990 ms 00:18:57.930 [2024-11-19 08:40:19.144196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.229518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.229587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:57.930 [2024-11-19 08:40:19.229601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.366 ms 00:18:57.930 [2024-11-19 08:40:19.229614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.229824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.229854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:57.930 [2024-11-19 08:40:19.229873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:18:57.930 [2024-11-19 08:40:19.229883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.233561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.233614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:57.930 [2024-11-19 08:40:19.233625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.649 ms 00:18:57.930 [2024-11-19 08:40:19.233634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.236465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.236498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:57.930 [2024-11-19 08:40:19.236508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.792 ms 00:18:57.930 [2024-11-19 08:40:19.236516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.236846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.236864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:57.930 [2024-11-19 08:40:19.236874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:18:57.930 [2024-11-19 08:40:19.236885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.273543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.273602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:57.930 [2024-11-19 08:40:19.273618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.694 ms 00:18:57.930 [2024-11-19 08:40:19.273629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:57.930 [2024-11-19 08:40:19.278372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.278432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:57.930 [2024-11-19 08:40:19.278443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.638 ms 00:18:57.930 [2024-11-19 08:40:19.278453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.281830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.281868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:57.930 [2024-11-19 08:40:19.281878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:18:57.930 [2024-11-19 08:40:19.281887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.285437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.285514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:57.930 [2024-11-19 08:40:19.285528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.516 ms 00:18:57.930 [2024-11-19 08:40:19.285539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.285604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.285633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:57.930 [2024-11-19 08:40:19.285642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:57.930 [2024-11-19 08:40:19.285652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.285760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.930 [2024-11-19 08:40:19.285772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:57.930 [2024-11-19 08:40:19.285780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:57.930 [2024-11-19 08:40:19.285801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.930 [2024-11-19 08:40:19.286885] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:57.930 [2024-11-19 08:40:19.287816] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3779.770 ms, result 0 00:18:57.930 [2024-11-19 08:40:19.288651] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:57.930 { 00:18:57.930 "name": "ftl0", 00:18:57.930 "uuid": "d4ca7f05-e3be-4ef5-9cfc-3c0158818b5e" 00:18:57.930 } 00:18:57.930 08:40:19 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:57.930 08:40:19 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:57.930 08:40:19 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:57.930 08:40:19 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:18:57.930 08:40:19 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:57.930 08:40:19 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:57.930 08:40:19 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:57.930 08:40:19 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:57.930 [ 00:18:57.930 { 00:18:57.930 "name": "ftl0", 00:18:57.930 "aliases": [ 00:18:57.930 "d4ca7f05-e3be-4ef5-9cfc-3c0158818b5e" 00:18:57.930 ], 00:18:57.930 "product_name": "FTL disk", 00:18:57.930 "block_size": 4096, 00:18:57.930 "num_blocks": 23592960, 00:18:57.930 "uuid": "d4ca7f05-e3be-4ef5-9cfc-3c0158818b5e", 00:18:57.930 "assigned_rate_limits": { 00:18:57.930 "rw_ios_per_sec": 0, 00:18:57.930 "rw_mbytes_per_sec": 0, 00:18:57.930 "r_mbytes_per_sec": 0, 00:18:57.930 "w_mbytes_per_sec": 0 00:18:57.930 }, 00:18:57.930 "claimed": false, 00:18:57.930 "zoned": false, 00:18:57.930 "supported_io_types": { 00:18:57.930 "read": true, 00:18:57.930 "write": true, 00:18:57.930 "unmap": true, 00:18:57.930 "flush": true, 00:18:57.930 "reset": false, 00:18:57.930 "nvme_admin": false, 00:18:57.930 "nvme_io": false, 00:18:57.930 "nvme_io_md": false, 00:18:57.930 "write_zeroes": true, 00:18:57.930 "zcopy": false, 00:18:57.930 "get_zone_info": false, 00:18:57.930 "zone_management": false, 00:18:57.930 "zone_append": false, 00:18:57.930 "compare": false, 00:18:57.930 "compare_and_write": false, 00:18:57.930 "abort": false, 00:18:57.930 "seek_hole": false, 00:18:57.930 "seek_data": false, 00:18:57.930 "copy": false, 00:18:57.930 "nvme_iov_md": false 00:18:57.930 }, 00:18:57.930 "driver_specific": { 00:18:57.930 "ftl": { 00:18:57.930 "base_bdev": "b0deede6-f487-4388-994a-df28283aeaf0", 00:18:57.930 "cache": "nvc0n1p0" 00:18:57.930 } 00:18:57.930 } 00:18:57.930 } 00:18:57.930 ] 00:18:57.930 08:40:19 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:18:57.930 08:40:19 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:57.930 08:40:19 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:58.190 08:40:19 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:58.190 08:40:19 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:58.190 08:40:20 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:58.190 { 00:18:58.190 "name": "ftl0", 00:18:58.190 "aliases": [ 00:18:58.190 "d4ca7f05-e3be-4ef5-9cfc-3c0158818b5e" 00:18:58.190 ], 00:18:58.190 "product_name": "FTL disk", 00:18:58.190 "block_size": 4096, 00:18:58.190 "num_blocks": 23592960, 00:18:58.190 "uuid": "d4ca7f05-e3be-4ef5-9cfc-3c0158818b5e", 00:18:58.190 "assigned_rate_limits": { 00:18:58.190 "rw_ios_per_sec": 0, 00:18:58.190 "rw_mbytes_per_sec": 0, 00:18:58.190 "r_mbytes_per_sec": 0, 00:18:58.190 "w_mbytes_per_sec": 0 00:18:58.190 }, 00:18:58.190 "claimed": false, 00:18:58.190 "zoned": false, 00:18:58.190 "supported_io_types": { 00:18:58.190 "read": true, 00:18:58.190 "write": true, 00:18:58.190 "unmap": true, 00:18:58.190 "flush": true, 00:18:58.190 "reset": false, 00:18:58.190 "nvme_admin": false, 00:18:58.190 "nvme_io": false, 00:18:58.190 "nvme_io_md": false, 00:18:58.190 "write_zeroes": true, 00:18:58.190 "zcopy": false, 00:18:58.190 "get_zone_info": false, 00:18:58.190 "zone_management": false, 00:18:58.190 "zone_append": false, 00:18:58.190 "compare": false, 00:18:58.190 "compare_and_write": false, 00:18:58.190 "abort": false, 00:18:58.190 "seek_hole": false, 00:18:58.190 "seek_data": false, 00:18:58.190 "copy": false, 00:18:58.190 "nvme_iov_md": false 00:18:58.190 }, 00:18:58.190 "driver_specific": { 00:18:58.190 "ftl": { 00:18:58.190 "base_bdev": "b0deede6-f487-4388-994a-df28283aeaf0", 
00:18:58.190 "cache": "nvc0n1p0" 00:18:58.190 } 00:18:58.190 } 00:18:58.190 } 00:18:58.190 ]' 00:18:58.190 08:40:20 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:58.453 08:40:20 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:58.453 08:40:20 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:58.453 [2024-11-19 08:40:20.298411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.453 [2024-11-19 08:40:20.298521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:58.453 [2024-11-19 08:40:20.298557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:58.453 [2024-11-19 08:40:20.298578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.453 [2024-11-19 08:40:20.298655] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:58.453 [2024-11-19 08:40:20.299375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.453 [2024-11-19 08:40:20.299422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:58.453 [2024-11-19 08:40:20.299448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:18:58.453 [2024-11-19 08:40:20.299491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.453 [2024-11-19 08:40:20.300014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.453 [2024-11-19 08:40:20.300063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:58.453 [2024-11-19 08:40:20.300098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:18:58.453 [2024-11-19 08:40:20.300121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.453 [2024-11-19 08:40:20.302899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.453 [2024-11-19 08:40:20.302946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:58.453 [2024-11-19 08:40:20.302971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.737 ms 00:18:58.453 [2024-11-19 08:40:20.302993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.453 [2024-11-19 08:40:20.308462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.453 [2024-11-19 08:40:20.308528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:58.453 [2024-11-19 08:40:20.308557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.407 ms 00:18:58.453 [2024-11-19 08:40:20.308581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.453 [2024-11-19 08:40:20.310365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.453 [2024-11-19 08:40:20.310440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:58.453 [2024-11-19 08:40:20.310467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:18:58.453 [2024-11-19 08:40:20.310488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.453 [2024-11-19 08:40:20.315322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.453 [2024-11-19 08:40:20.315401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:58.453 [2024-11-19 08:40:20.315431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.787 ms 00:18:58.453 [2024-11-19 08:40:20.315454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.453 [2024-11-19 08:40:20.315640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.453 [2024-11-19 08:40:20.315673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:58.453 [2024-11-19 08:40:20.315706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:58.453 [2024-11-19 08:40:20.315744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.453 [2024-11-19 08:40:20.317703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.453 [2024-11-19 08:40:20.317785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:58.453 [2024-11-19 08:40:20.317812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.902 ms 00:18:58.453 [2024-11-19 08:40:20.317836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.453 [2024-11-19 08:40:20.319394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.453 [2024-11-19 08:40:20.319471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:58.453 [2024-11-19 08:40:20.319519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms 00:18:58.453 [2024-11-19 08:40:20.319542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.453 [2024-11-19 08:40:20.320841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.453 [2024-11-19 08:40:20.320910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:58.453 [2024-11-19 08:40:20.320943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:18:58.453 [2024-11-19 08:40:20.320966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.453 [2024-11-19 08:40:20.322163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.453 [2024-11-19 08:40:20.322227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:58.453 [2024-11-19 08:40:20.322258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:18:58.453 [2024-11-19 08:40:20.322280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.453 [2024-11-19 08:40:20.322340] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:58.453 [2024-11-19 08:40:20.322377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322678] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.322983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 
08:40:20.323540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:58.453 [2024-11-19 08:40:20.323605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
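Each band in the dump above reports 0 of 261120 valid blocks; with the 4096-byte block size reported for ftl0 that is 1020 MiB per band, and the exported num_blocks of 23592960 works out to 92160 MiB (90 GiB) of user-visible capacity. A quick shell check of that arithmetic, using only figures taken from this log:

  # Per-band size: 261120 blocks x 4096 B = 1020 MiB
  echo $(( 261120 * 4096 / 1024 / 1024 ))     # -> 1020
  # Exported capacity: 23592960 blocks x 4096 B = 92160 MiB (90 GiB)
  echo $(( 23592960 * 4096 / 1024 / 1024 ))   # -> 92160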
00:18:58.454 [2024-11-19 08:40:20.323763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.323988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:58.454 [2024-11-19 08:40:20.324146] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:58.454 [2024-11-19 08:40:20.324153] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d4ca7f05-e3be-4ef5-9cfc-3c0158818b5e 00:18:58.454 [2024-11-19 08:40:20.324163] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:58.454 [2024-11-19 08:40:20.324170] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:58.454 [2024-11-19 08:40:20.324179] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:58.454 [2024-11-19 08:40:20.324202] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:58.454 [2024-11-19 08:40:20.324211] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:58.454 [2024-11-19 08:40:20.324219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:58.454 
[2024-11-19 08:40:20.324228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:58.454 [2024-11-19 08:40:20.324234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:58.454 [2024-11-19 08:40:20.324242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:58.454 [2024-11-19 08:40:20.324250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.454 [2024-11-19 08:40:20.324259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:58.454 [2024-11-19 08:40:20.324267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.915 ms 00:18:58.454 [2024-11-19 08:40:20.324278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.454 [2024-11-19 08:40:20.326166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.454 [2024-11-19 08:40:20.326220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:58.454 [2024-11-19 08:40:20.326251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:18:58.454 [2024-11-19 08:40:20.326273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.454 [2024-11-19 08:40:20.326415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.454 [2024-11-19 08:40:20.326449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:58.454 [2024-11-19 08:40:20.326477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:58.454 [2024-11-19 08:40:20.326499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.454 [2024-11-19 08:40:20.332961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.454 [2024-11-19 08:40:20.333045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:58.454 [2024-11-19 08:40:20.333097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.454 [2024-11-19 08:40:20.333123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.454 [2024-11-19 08:40:20.333245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.454 [2024-11-19 08:40:20.333283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:58.454 [2024-11-19 08:40:20.333312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.454 [2024-11-19 08:40:20.333336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.454 [2024-11-19 08:40:20.333436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.454 [2024-11-19 08:40:20.333476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:58.454 [2024-11-19 08:40:20.333506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.454 [2024-11-19 08:40:20.333540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.454 [2024-11-19 08:40:20.333606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.454 [2024-11-19 08:40:20.333638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:58.455 [2024-11-19 08:40:20.333666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.455 [2024-11-19 08:40:20.333688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.455 [2024-11-19 08:40:20.347551] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:58.455 [2024-11-19 08:40:20.347677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:58.455 [2024-11-19 08:40:20.347706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.455 [2024-11-19 08:40:20.347758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.714 [2024-11-19 08:40:20.356852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.714 [2024-11-19 08:40:20.356969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:58.714 [2024-11-19 08:40:20.357015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.714 [2024-11-19 08:40:20.357040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.714 [2024-11-19 08:40:20.357120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.714 [2024-11-19 08:40:20.357169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:58.714 [2024-11-19 08:40:20.357203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.714 [2024-11-19 08:40:20.357237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.714 [2024-11-19 08:40:20.357305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.714 [2024-11-19 08:40:20.357337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:58.714 [2024-11-19 08:40:20.357364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.714 [2024-11-19 08:40:20.357386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.714 [2024-11-19 08:40:20.357492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.714 [2024-11-19 08:40:20.357534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:58.714 [2024-11-19 08:40:20.357563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.714 [2024-11-19 08:40:20.357595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.714 [2024-11-19 08:40:20.357677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.714 [2024-11-19 08:40:20.357715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:58.715 [2024-11-19 08:40:20.357751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.715 [2024-11-19 08:40:20.357795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.715 [2024-11-19 08:40:20.357887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.715 [2024-11-19 08:40:20.357922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:58.715 [2024-11-19 08:40:20.357951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.715 [2024-11-19 08:40:20.357973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.715 [2024-11-19 08:40:20.358043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.715 [2024-11-19 08:40:20.358077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:58.715 [2024-11-19 08:40:20.358105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.715 [2024-11-19 08:40:20.358127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.715 
[2024-11-19 08:40:20.358321] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.002 ms, result 0 00:18:58.715 true 00:18:58.715 08:40:20 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 86250 00:18:58.715 08:40:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 86250 ']' 00:18:58.715 08:40:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 86250 00:18:58.715 08:40:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:18:58.715 08:40:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.715 08:40:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86250 00:18:58.715 08:40:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:58.715 08:40:20 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:58.715 killing process with pid 86250 00:18:58.715 08:40:20 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86250' 00:18:58.715 08:40:20 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 86250 00:18:58.715 08:40:20 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 86250 00:19:03.997 08:40:25 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:03.998 65536+0 records in 00:19:03.998 65536+0 records out 00:19:03.998 268435456 bytes (268 MB, 256 MiB) copied, 0.831918 s, 323 MB/s 00:19:03.998 08:40:25 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:04.258 [2024-11-19 08:40:25.953694] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
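The step traced just above pre-generates 256 MiB of random data with dd and then replays it onto the FTL bdev with spdk_dd, using the bdev subsystem config saved earlier in the run. A condensed sketch of those two commands; the dd output path is an assumption (the trace only shows if/bs/count, but spdk_dd reads the random_pattern file next), while the spdk_dd flags are the ones shown in the log:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # 65536 x 4 KiB = 256 MiB of random data; of= is assumed to be the
  # random_pattern file that the following spdk_dd invocation consumes.
  dd if=/dev/urandom of="$SPDK_DIR/test/ftl/random_pattern" bs=4K count=65536
  # Replay the pattern onto ftl0: --ob selects the output bdev, --json points
  # at the config produced by save_subsystem_config -n bdev.
  "$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/random_pattern" \
      --ob=ftl0 --json="$SPDK_DIR/test/ftl/config/ftl.json"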
00:19:04.258 [2024-11-19 08:40:25.953829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86445 ] 00:19:04.258 [2024-11-19 08:40:26.112624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.258 [2024-11-19 08:40:26.139374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.519 [2024-11-19 08:40:26.241223] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:04.519 [2024-11-19 08:40:26.241289] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:04.519 [2024-11-19 08:40:26.395064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.519 [2024-11-19 08:40:26.395195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:04.520 [2024-11-19 08:40:26.395210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:04.520 [2024-11-19 08:40:26.395233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.520 [2024-11-19 08:40:26.397246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.520 [2024-11-19 08:40:26.397283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:04.520 [2024-11-19 08:40:26.397292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.987 ms 00:19:04.520 [2024-11-19 08:40:26.397323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.520 [2024-11-19 08:40:26.397391] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:04.520 [2024-11-19 08:40:26.397599] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:04.520 [2024-11-19 08:40:26.397615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.520 [2024-11-19 08:40:26.397625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.520 [2024-11-19 08:40:26.397633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:19:04.520 [2024-11-19 08:40:26.397639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.520 [2024-11-19 08:40:26.399073] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:04.520 [2024-11-19 08:40:26.401583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.520 [2024-11-19 08:40:26.401664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:04.520 [2024-11-19 08:40:26.401681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.517 ms 00:19:04.520 [2024-11-19 08:40:26.401689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.520 [2024-11-19 08:40:26.401763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.520 [2024-11-19 08:40:26.401774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:04.520 [2024-11-19 08:40:26.401783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:04.520 [2024-11-19 08:40:26.401790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.520 [2024-11-19 08:40:26.408406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:04.520 [2024-11-19 08:40:26.408470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.520 [2024-11-19 08:40:26.408482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.596 ms 00:19:04.520 [2024-11-19 08:40:26.408489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.520 [2024-11-19 08:40:26.408605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.520 [2024-11-19 08:40:26.408626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.520 [2024-11-19 08:40:26.408650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:04.520 [2024-11-19 08:40:26.408657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.520 [2024-11-19 08:40:26.408698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.520 [2024-11-19 08:40:26.408707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:04.520 [2024-11-19 08:40:26.408714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:04.520 [2024-11-19 08:40:26.408728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.520 [2024-11-19 08:40:26.408767] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:04.520 [2024-11-19 08:40:26.410342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.520 [2024-11-19 08:40:26.410370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.520 [2024-11-19 08:40:26.410378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:19:04.520 [2024-11-19 08:40:26.410385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.520 [2024-11-19 08:40:26.410426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.520 [2024-11-19 08:40:26.410437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:04.520 [2024-11-19 08:40:26.410445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:04.520 [2024-11-19 08:40:26.410451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.520 [2024-11-19 08:40:26.410476] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:04.520 [2024-11-19 08:40:26.410500] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:04.520 [2024-11-19 08:40:26.410535] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:04.520 [2024-11-19 08:40:26.410552] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:04.520 [2024-11-19 08:40:26.410632] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:04.520 [2024-11-19 08:40:26.410647] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:04.520 [2024-11-19 08:40:26.410656] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:04.520 [2024-11-19 08:40:26.410665] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:04.520 [2024-11-19 08:40:26.410673] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:04.520 [2024-11-19 08:40:26.410680] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:04.520 [2024-11-19 08:40:26.410686] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:04.520 [2024-11-19 08:40:26.410699] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:04.520 [2024-11-19 08:40:26.410706] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:04.520 [2024-11-19 08:40:26.410729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.520 [2024-11-19 08:40:26.410755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:04.520 [2024-11-19 08:40:26.410763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:19:04.520 [2024-11-19 08:40:26.410776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.520 [2024-11-19 08:40:26.410849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.520 [2024-11-19 08:40:26.410857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:04.520 [2024-11-19 08:40:26.410864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:04.520 [2024-11-19 08:40:26.410870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.520 [2024-11-19 08:40:26.410951] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:04.520 [2024-11-19 08:40:26.410961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:04.520 [2024-11-19 08:40:26.410973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.520 [2024-11-19 08:40:26.410979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.520 [2024-11-19 08:40:26.410986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:04.520 [2024-11-19 08:40:26.410993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:04.520 [2024-11-19 08:40:26.410999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:04.520 [2024-11-19 08:40:26.411008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:04.520 [2024-11-19 08:40:26.411015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:04.520 [2024-11-19 08:40:26.411021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.520 [2024-11-19 08:40:26.411028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:04.520 [2024-11-19 08:40:26.411035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:04.520 [2024-11-19 08:40:26.411041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.520 [2024-11-19 08:40:26.411047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:04.520 [2024-11-19 08:40:26.411054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:04.520 [2024-11-19 08:40:26.411060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.520 [2024-11-19 08:40:26.411066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:04.520 [2024-11-19 08:40:26.411072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:04.520 [2024-11-19 08:40:26.411078] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.520 [2024-11-19 08:40:26.411084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:04.520 [2024-11-19 08:40:26.411090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:04.520 [2024-11-19 08:40:26.411096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.520 [2024-11-19 08:40:26.411102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:04.520 [2024-11-19 08:40:26.411113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:04.521 [2024-11-19 08:40:26.411119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.521 [2024-11-19 08:40:26.411125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:04.521 [2024-11-19 08:40:26.411131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:04.521 [2024-11-19 08:40:26.411136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.521 [2024-11-19 08:40:26.411142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:04.521 [2024-11-19 08:40:26.411148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:04.521 [2024-11-19 08:40:26.411154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.521 [2024-11-19 08:40:26.411160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:04.521 [2024-11-19 08:40:26.411166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:04.521 [2024-11-19 08:40:26.411171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.521 [2024-11-19 08:40:26.411177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:04.521 [2024-11-19 08:40:26.411183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:04.521 [2024-11-19 08:40:26.411189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.521 [2024-11-19 08:40:26.411195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:04.521 [2024-11-19 08:40:26.411201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:04.521 [2024-11-19 08:40:26.411208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.521 [2024-11-19 08:40:26.411214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:04.521 [2024-11-19 08:40:26.411221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:04.521 [2024-11-19 08:40:26.411227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.521 [2024-11-19 08:40:26.411233] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:04.521 [2024-11-19 08:40:26.411240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:04.521 [2024-11-19 08:40:26.411246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.521 [2024-11-19 08:40:26.411253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.521 [2024-11-19 08:40:26.411259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:04.521 [2024-11-19 08:40:26.411265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:04.521 [2024-11-19 08:40:26.411271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:04.521 
[2024-11-19 08:40:26.411278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:04.521 [2024-11-19 08:40:26.411284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:04.521 [2024-11-19 08:40:26.411291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:04.521 [2024-11-19 08:40:26.411299] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:04.521 [2024-11-19 08:40:26.411307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.521 [2024-11-19 08:40:26.411316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:04.521 [2024-11-19 08:40:26.411324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:04.521 [2024-11-19 08:40:26.411330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:04.521 [2024-11-19 08:40:26.411337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:04.521 [2024-11-19 08:40:26.411343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:04.521 [2024-11-19 08:40:26.411349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:04.521 [2024-11-19 08:40:26.411355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:04.521 [2024-11-19 08:40:26.411361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:04.521 [2024-11-19 08:40:26.411368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:04.521 [2024-11-19 08:40:26.411383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:04.521 [2024-11-19 08:40:26.411390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:04.521 [2024-11-19 08:40:26.411397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:04.521 [2024-11-19 08:40:26.411403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:04.521 [2024-11-19 08:40:26.411410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:04.521 [2024-11-19 08:40:26.411416] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:04.521 [2024-11-19 08:40:26.411423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.521 [2024-11-19 08:40:26.411436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:04.521 [2024-11-19 08:40:26.411443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:04.521 [2024-11-19 08:40:26.411449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:04.521 [2024-11-19 08:40:26.411457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:04.521 [2024-11-19 08:40:26.411465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.521 [2024-11-19 08:40:26.411479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:04.521 [2024-11-19 08:40:26.411487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:19:04.521 [2024-11-19 08:40:26.411494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.782 [2024-11-19 08:40:26.423458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.782 [2024-11-19 08:40:26.423543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.782 [2024-11-19 08:40:26.423557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.925 ms 00:19:04.782 [2024-11-19 08:40:26.423581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.782 [2024-11-19 08:40:26.423696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.782 [2024-11-19 08:40:26.423706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:04.782 [2024-11-19 08:40:26.423718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:04.782 [2024-11-19 08:40:26.423725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.782 [2024-11-19 08:40:26.450990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.782 [2024-11-19 08:40:26.451177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.782 [2024-11-19 08:40:26.451212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.251 ms 00:19:04.782 [2024-11-19 08:40:26.451232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.451370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.451401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.783 [2024-11-19 08:40:26.451420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:04.783 [2024-11-19 08:40:26.451439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.452049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.452075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.783 [2024-11-19 08:40:26.452119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:19:04.783 [2024-11-19 08:40:26.452137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.452406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.452457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:04.783 [2024-11-19 08:40:26.452502] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:19:04.783 [2024-11-19 08:40:26.452520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.462547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.462663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.783 [2024-11-19 08:40:26.462699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.998 ms 00:19:04.783 [2024-11-19 08:40:26.462737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.465966] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:04.783 [2024-11-19 08:40:26.466022] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:04.783 [2024-11-19 08:40:26.466041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.466054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:04.783 [2024-11-19 08:40:26.466067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.145 ms 00:19:04.783 [2024-11-19 08:40:26.466078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.482176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.482209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:04.783 [2024-11-19 08:40:26.482220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.066 ms 00:19:04.783 [2024-11-19 08:40:26.482238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.484013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.484043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:04.783 [2024-11-19 08:40:26.484052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.710 ms 00:19:04.783 [2024-11-19 08:40:26.484059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.485543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.485624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:04.783 [2024-11-19 08:40:26.485637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.448 ms 00:19:04.783 [2024-11-19 08:40:26.485643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.485925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.485941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:04.783 [2024-11-19 08:40:26.485949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:19:04.783 [2024-11-19 08:40:26.485971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.505484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.505559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:04.783 [2024-11-19 08:40:26.505573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
19.524 ms 00:19:04.783 [2024-11-19 08:40:26.505589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.511523] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:04.783 [2024-11-19 08:40:26.527627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.527786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:04.783 [2024-11-19 08:40:26.527803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.999 ms 00:19:04.783 [2024-11-19 08:40:26.527812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.527932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.527944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:04.783 [2024-11-19 08:40:26.527954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:04.783 [2024-11-19 08:40:26.527961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.528019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.528037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:04.783 [2024-11-19 08:40:26.528045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:04.783 [2024-11-19 08:40:26.528052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.528074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.528082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:04.783 [2024-11-19 08:40:26.528089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:04.783 [2024-11-19 08:40:26.528097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.528129] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:04.783 [2024-11-19 08:40:26.528138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.528147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:04.783 [2024-11-19 08:40:26.528154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:04.783 [2024-11-19 08:40:26.528161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.531865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.531896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:04.783 [2024-11-19 08:40:26.531907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.692 ms 00:19:04.783 [2024-11-19 08:40:26.531914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 [2024-11-19 08:40:26.532001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.783 [2024-11-19 08:40:26.532014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:04.783 [2024-11-19 08:40:26.532022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:04.783 [2024-11-19 08:40:26.532028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.783 
[2024-11-19 08:40:26.532920] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:04.783 [2024-11-19 08:40:26.533841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 137.812 ms, result 0 00:19:04.783 [2024-11-19 08:40:26.534515] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:04.783 [2024-11-19 08:40:26.544342] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:05.725  [2024-11-19T08:40:28.573Z] Copying: 26/256 [MB] (26 MBps) [2024-11-19T08:40:29.957Z] Copying: 52/256 [MB] (26 MBps) [2024-11-19T08:40:30.898Z] Copying: 80/256 [MB] (27 MBps) [2024-11-19T08:40:31.839Z] Copying: 107/256 [MB] (27 MBps) [2024-11-19T08:40:32.778Z] Copying: 134/256 [MB] (26 MBps) [2024-11-19T08:40:33.718Z] Copying: 160/256 [MB] (26 MBps) [2024-11-19T08:40:34.661Z] Copying: 188/256 [MB] (27 MBps) [2024-11-19T08:40:35.600Z] Copying: 216/256 [MB] (27 MBps) [2024-11-19T08:40:36.172Z] Copying: 243/256 [MB] (27 MBps) [2024-11-19T08:40:36.172Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-19 08:40:36.015087] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:14.265 [2024-11-19 08:40:36.016487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.265 [2024-11-19 08:40:36.016550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:14.265 [2024-11-19 08:40:36.016585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:14.265 [2024-11-19 08:40:36.016605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.265 [2024-11-19 08:40:36.016644] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:14.265 [2024-11-19 08:40:36.017342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.265 [2024-11-19 08:40:36.017397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:14.265 [2024-11-19 08:40:36.017421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:19:14.265 [2024-11-19 08:40:36.017440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.265 [2024-11-19 08:40:36.019279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.265 [2024-11-19 08:40:36.019362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:14.265 [2024-11-19 08:40:36.019391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.810 ms 00:19:14.265 [2024-11-19 08:40:36.019410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.265 [2024-11-19 08:40:36.025655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.265 [2024-11-19 08:40:36.025733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:14.265 [2024-11-19 08:40:36.025763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.220 ms 00:19:14.265 [2024-11-19 08:40:36.025784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.265 [2024-11-19 08:40:36.030924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.265 [2024-11-19 08:40:36.030984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:14.265 [2024-11-19 
08:40:36.031009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.102 ms 00:19:14.265 [2024-11-19 08:40:36.031027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.265 [2024-11-19 08:40:36.032501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.265 [2024-11-19 08:40:36.032571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:14.265 [2024-11-19 08:40:36.032597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:19:14.265 [2024-11-19 08:40:36.032616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.265 [2024-11-19 08:40:36.036601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.265 [2024-11-19 08:40:36.036690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:14.265 [2024-11-19 08:40:36.036741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.940 ms 00:19:14.265 [2024-11-19 08:40:36.036786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.265 [2024-11-19 08:40:36.036907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.265 [2024-11-19 08:40:36.036952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:14.265 [2024-11-19 08:40:36.036985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:14.265 [2024-11-19 08:40:36.037004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.265 [2024-11-19 08:40:36.039299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.265 [2024-11-19 08:40:36.039361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:14.265 [2024-11-19 08:40:36.039372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.226 ms 00:19:14.265 [2024-11-19 08:40:36.039378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.265 [2024-11-19 08:40:36.040919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.265 [2024-11-19 08:40:36.040950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:14.265 [2024-11-19 08:40:36.040958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:19:14.265 [2024-11-19 08:40:36.040964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.265 [2024-11-19 08:40:36.042117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.265 [2024-11-19 08:40:36.042185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:14.265 [2024-11-19 08:40:36.042196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:19:14.265 [2024-11-19 08:40:36.042203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.265 [2024-11-19 08:40:36.043326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.265 [2024-11-19 08:40:36.043359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:14.265 [2024-11-19 08:40:36.043368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:19:14.265 [2024-11-19 08:40:36.043374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.265 [2024-11-19 08:40:36.043398] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:14.265 [2024-11-19 08:40:36.043417] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:14.265 [2024-11-19 08:40:36.043548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043587] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 
[2024-11-19 08:40:36.043784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 
state: free 00:19:14.266 [2024-11-19 08:40:36.043953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.043999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:19:14.266 [2024-11-19 08:40:36.044141] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:14.266 [2024-11-19 08:40:36.044147] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d4ca7f05-e3be-4ef5-9cfc-3c0158818b5e 00:19:14.266 [2024-11-19 08:40:36.044154] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:14.266 [2024-11-19 08:40:36.044160] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:14.266 [2024-11-19 08:40:36.044167] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:14.266 [2024-11-19 08:40:36.044174] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:14.266 [2024-11-19 08:40:36.044180] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:14.266 [2024-11-19 08:40:36.044187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:14.266 [2024-11-19 08:40:36.044194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:14.266 [2024-11-19 08:40:36.044199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:14.267 [2024-11-19 08:40:36.044205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:14.267 [2024-11-19 08:40:36.044212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.267 [2024-11-19 08:40:36.044219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:14.267 [2024-11-19 08:40:36.044229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:19:14.267 [2024-11-19 08:40:36.044236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.045942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.267 [2024-11-19 08:40:36.045962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:14.267 [2024-11-19 08:40:36.045970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.693 ms 00:19:14.267 [2024-11-19 08:40:36.045976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.046076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.267 [2024-11-19 08:40:36.046088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:14.267 [2024-11-19 08:40:36.046095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:14.267 [2024-11-19 08:40:36.046108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.052114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.267 [2024-11-19 08:40:36.052189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.267 [2024-11-19 08:40:36.052201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.267 [2024-11-19 08:40:36.052208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.052274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.267 [2024-11-19 08:40:36.052288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.267 [2024-11-19 08:40:36.052295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.267 [2024-11-19 08:40:36.052302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:19:14.267 [2024-11-19 08:40:36.052349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.267 [2024-11-19 08:40:36.052358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.267 [2024-11-19 08:40:36.052366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.267 [2024-11-19 08:40:36.052373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.052388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.267 [2024-11-19 08:40:36.052396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.267 [2024-11-19 08:40:36.052406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.267 [2024-11-19 08:40:36.052413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.065912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.267 [2024-11-19 08:40:36.065966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.267 [2024-11-19 08:40:36.065976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.267 [2024-11-19 08:40:36.065984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.074230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.267 [2024-11-19 08:40:36.074277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.267 [2024-11-19 08:40:36.074287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.267 [2024-11-19 08:40:36.074294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.074346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.267 [2024-11-19 08:40:36.074355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.267 [2024-11-19 08:40:36.074362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.267 [2024-11-19 08:40:36.074369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.074401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.267 [2024-11-19 08:40:36.074411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.267 [2024-11-19 08:40:36.074418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.267 [2024-11-19 08:40:36.074427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.074495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.267 [2024-11-19 08:40:36.074505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.267 [2024-11-19 08:40:36.074512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.267 [2024-11-19 08:40:36.074519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.074554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.267 [2024-11-19 08:40:36.074576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:14.267 [2024-11-19 08:40:36.074584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.267 [2024-11-19 
08:40:36.074590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.074638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.267 [2024-11-19 08:40:36.074648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.267 [2024-11-19 08:40:36.074654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.267 [2024-11-19 08:40:36.074661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.074700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.267 [2024-11-19 08:40:36.074708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.267 [2024-11-19 08:40:36.074715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.267 [2024-11-19 08:40:36.074743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.267 [2024-11-19 08:40:36.074914] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.485 ms, result 0 00:19:14.837 00:19:14.837 00:19:14.837 08:40:36 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=86558 00:19:14.837 08:40:36 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:14.837 08:40:36 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 86558 00:19:14.837 08:40:36 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 86558 ']' 00:19:14.837 08:40:36 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.837 08:40:36 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.837 08:40:36 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.837 08:40:36 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.837 08:40:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:14.837 [2024-11-19 08:40:36.683867] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:19:14.837 [2024-11-19 08:40:36.684095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86558 ] 00:19:15.097 [2024-11-19 08:40:36.839905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.097 [2024-11-19 08:40:36.864895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.666 08:40:37 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.666 08:40:37 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:15.666 08:40:37 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:15.926 [2024-11-19 08:40:37.673351] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:15.926 [2024-11-19 08:40:37.673492] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:16.187 [2024-11-19 08:40:37.843425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.187 [2024-11-19 08:40:37.843554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:16.187 [2024-11-19 08:40:37.843587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:16.187 [2024-11-19 08:40:37.843609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.187 [2024-11-19 08:40:37.845580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.187 [2024-11-19 08:40:37.845658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:16.187 [2024-11-19 08:40:37.845686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.933 ms 00:19:16.187 [2024-11-19 08:40:37.845723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.187 [2024-11-19 08:40:37.845803] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:16.187 [2024-11-19 08:40:37.846032] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:16.187 [2024-11-19 08:40:37.846095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.187 [2024-11-19 08:40:37.846137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:16.187 [2024-11-19 08:40:37.846159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:19:16.187 [2024-11-19 08:40:37.846209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.187 [2024-11-19 08:40:37.847638] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:16.187 [2024-11-19 08:40:37.850040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.187 [2024-11-19 08:40:37.850107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:16.187 [2024-11-19 08:40:37.850137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.404 ms 00:19:16.187 [2024-11-19 08:40:37.850165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.187 [2024-11-19 08:40:37.850236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.187 [2024-11-19 08:40:37.850273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:16.187 [2024-11-19 08:40:37.850300] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:16.187 [2024-11-19 08:40:37.850345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.187 [2024-11-19 08:40:37.857096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.188 [2024-11-19 08:40:37.857155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:16.188 [2024-11-19 08:40:37.857183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.642 ms 00:19:16.188 [2024-11-19 08:40:37.857202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.188 [2024-11-19 08:40:37.857333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.188 [2024-11-19 08:40:37.857365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:16.188 [2024-11-19 08:40:37.857404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:16.188 [2024-11-19 08:40:37.857439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.188 [2024-11-19 08:40:37.857498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.188 [2024-11-19 08:40:37.857546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:16.188 [2024-11-19 08:40:37.857578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:16.188 [2024-11-19 08:40:37.857604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.188 [2024-11-19 08:40:37.857660] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:16.188 [2024-11-19 08:40:37.859324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.188 [2024-11-19 08:40:37.859386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:16.188 [2024-11-19 08:40:37.859417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:19:16.188 [2024-11-19 08:40:37.859442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.188 [2024-11-19 08:40:37.859514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.188 [2024-11-19 08:40:37.859548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:16.188 [2024-11-19 08:40:37.859575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:16.188 [2024-11-19 08:40:37.859596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.188 [2024-11-19 08:40:37.859631] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:16.188 [2024-11-19 08:40:37.859704] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:16.188 [2024-11-19 08:40:37.859786] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:16.188 [2024-11-19 08:40:37.859820] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:16.188 [2024-11-19 08:40:37.859904] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:16.188 [2024-11-19 08:40:37.859916] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:16.188 [2024-11-19 08:40:37.859933] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:16.188 [2024-11-19 08:40:37.859947] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:16.188 [2024-11-19 08:40:37.859955] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:16.188 [2024-11-19 08:40:37.859973] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:16.188 [2024-11-19 08:40:37.859987] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:16.188 [2024-11-19 08:40:37.859996] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:16.188 [2024-11-19 08:40:37.860003] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:16.188 [2024-11-19 08:40:37.860015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.188 [2024-11-19 08:40:37.860022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:16.188 [2024-11-19 08:40:37.860031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:19:16.188 [2024-11-19 08:40:37.860038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.188 [2024-11-19 08:40:37.860120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.188 [2024-11-19 08:40:37.860129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:16.188 [2024-11-19 08:40:37.860144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:16.188 [2024-11-19 08:40:37.860151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.188 [2024-11-19 08:40:37.860236] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:16.188 [2024-11-19 08:40:37.860247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:16.188 [2024-11-19 08:40:37.860265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:16.188 [2024-11-19 08:40:37.860278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.188 [2024-11-19 08:40:37.860290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:16.188 [2024-11-19 08:40:37.860297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:16.188 [2024-11-19 08:40:37.860305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:16.188 [2024-11-19 08:40:37.860312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:16.188 [2024-11-19 08:40:37.860320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:16.188 [2024-11-19 08:40:37.860327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:16.188 [2024-11-19 08:40:37.860334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:16.188 [2024-11-19 08:40:37.860341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:16.188 [2024-11-19 08:40:37.860348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:16.188 [2024-11-19 08:40:37.860355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:16.188 [2024-11-19 08:40:37.860364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:16.188 [2024-11-19 08:40:37.860370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.188 
[2024-11-19 08:40:37.860377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:16.188 [2024-11-19 08:40:37.860384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:16.188 [2024-11-19 08:40:37.860391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.188 [2024-11-19 08:40:37.860398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:16.188 [2024-11-19 08:40:37.860408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:16.188 [2024-11-19 08:40:37.860414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:16.188 [2024-11-19 08:40:37.860421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:16.188 [2024-11-19 08:40:37.860428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:16.188 [2024-11-19 08:40:37.860437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:16.188 [2024-11-19 08:40:37.860443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:16.188 [2024-11-19 08:40:37.860451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:16.188 [2024-11-19 08:40:37.860457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:16.188 [2024-11-19 08:40:37.860465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:16.188 [2024-11-19 08:40:37.860471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:16.188 [2024-11-19 08:40:37.860479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:16.188 [2024-11-19 08:40:37.860484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:16.188 [2024-11-19 08:40:37.860492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:16.188 [2024-11-19 08:40:37.860498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:16.188 [2024-11-19 08:40:37.860505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:16.188 [2024-11-19 08:40:37.860512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:16.188 [2024-11-19 08:40:37.860520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:16.188 [2024-11-19 08:40:37.860526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:16.188 [2024-11-19 08:40:37.860534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:16.188 [2024-11-19 08:40:37.860540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.188 [2024-11-19 08:40:37.860548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:16.188 [2024-11-19 08:40:37.860554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:16.188 [2024-11-19 08:40:37.860561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.188 [2024-11-19 08:40:37.860567] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:16.188 [2024-11-19 08:40:37.860576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:16.188 [2024-11-19 08:40:37.860583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:16.188 [2024-11-19 08:40:37.860591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.188 [2024-11-19 08:40:37.860598] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:16.188 [2024-11-19 08:40:37.860606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:16.188 [2024-11-19 08:40:37.860612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:16.188 [2024-11-19 08:40:37.860621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:16.188 [2024-11-19 08:40:37.860627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:16.189 [2024-11-19 08:40:37.860651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:16.189 [2024-11-19 08:40:37.860659] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:16.189 [2024-11-19 08:40:37.860671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:16.189 [2024-11-19 08:40:37.860685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:16.189 [2024-11-19 08:40:37.860694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:16.189 [2024-11-19 08:40:37.860701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:16.189 [2024-11-19 08:40:37.860709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:16.189 [2024-11-19 08:40:37.860727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:16.189 [2024-11-19 08:40:37.860736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:16.189 [2024-11-19 08:40:37.860743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:16.189 [2024-11-19 08:40:37.860751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:16.189 [2024-11-19 08:40:37.860758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:16.189 [2024-11-19 08:40:37.860766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:16.189 [2024-11-19 08:40:37.860773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:16.189 [2024-11-19 08:40:37.860781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:16.189 [2024-11-19 08:40:37.860788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:16.189 [2024-11-19 08:40:37.860807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:16.189 [2024-11-19 08:40:37.860814] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:16.189 [2024-11-19 
08:40:37.860825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:16.189 [2024-11-19 08:40:37.860833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:16.189 [2024-11-19 08:40:37.860843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:16.189 [2024-11-19 08:40:37.860850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:16.189 [2024-11-19 08:40:37.860858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:16.189 [2024-11-19 08:40:37.860866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.860877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:16.189 [2024-11-19 08:40:37.860884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:19:16.189 [2024-11-19 08:40:37.860892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.189 [2024-11-19 08:40:37.872900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.872978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:16.189 [2024-11-19 08:40:37.873007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.974 ms 00:19:16.189 [2024-11-19 08:40:37.873028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.189 [2024-11-19 08:40:37.873168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.873229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:16.189 [2024-11-19 08:40:37.873267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:16.189 [2024-11-19 08:40:37.873288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.189 [2024-11-19 08:40:37.883620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.883698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:16.189 [2024-11-19 08:40:37.883767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.280 ms 00:19:16.189 [2024-11-19 08:40:37.883791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.189 [2024-11-19 08:40:37.883878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.883905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:16.189 [2024-11-19 08:40:37.883961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:16.189 [2024-11-19 08:40:37.883994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.189 [2024-11-19 08:40:37.884454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.884498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:16.189 [2024-11-19 08:40:37.884527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:19:16.189 [2024-11-19 08:40:37.884549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:16.189 [2024-11-19 08:40:37.884687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.884749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:16.189 [2024-11-19 08:40:37.884781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:16.189 [2024-11-19 08:40:37.884812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.189 [2024-11-19 08:40:37.891678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.891771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:16.189 [2024-11-19 08:40:37.891821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.835 ms 00:19:16.189 [2024-11-19 08:40:37.891844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.189 [2024-11-19 08:40:37.894433] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:16.189 [2024-11-19 08:40:37.894516] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:16.189 [2024-11-19 08:40:37.894553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.894574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:16.189 [2024-11-19 08:40:37.894593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:19:16.189 [2024-11-19 08:40:37.894613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.189 [2024-11-19 08:40:37.907120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.907210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:16.189 [2024-11-19 08:40:37.907240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.458 ms 00:19:16.189 [2024-11-19 08:40:37.907263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.189 [2024-11-19 08:40:37.909060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.909128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:16.189 [2024-11-19 08:40:37.909177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.722 ms 00:19:16.189 [2024-11-19 08:40:37.909199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.189 [2024-11-19 08:40:37.910717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.910814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:16.189 [2024-11-19 08:40:37.910846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms 00:19:16.189 [2024-11-19 08:40:37.910870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.189 [2024-11-19 08:40:37.911182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.911250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:16.189 [2024-11-19 08:40:37.911281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:19:16.189 [2024-11-19 08:40:37.911302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.189 [2024-11-19 08:40:37.942972] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.189 [2024-11-19 08:40:37.943136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:16.189 [2024-11-19 08:40:37.943173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.626 ms 00:19:16.189 [2024-11-19 08:40:37.943201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.190 [2024-11-19 08:40:37.950171] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:16.190 [2024-11-19 08:40:37.965805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.190 [2024-11-19 08:40:37.965945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:16.190 [2024-11-19 08:40:37.965979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.501 ms 00:19:16.190 [2024-11-19 08:40:37.965998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.190 [2024-11-19 08:40:37.966114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.190 [2024-11-19 08:40:37.966138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:16.190 [2024-11-19 08:40:37.966189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:16.190 [2024-11-19 08:40:37.966220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.190 [2024-11-19 08:40:37.966303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.190 [2024-11-19 08:40:37.966338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:16.190 [2024-11-19 08:40:37.966367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:16.190 [2024-11-19 08:40:37.966392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.190 [2024-11-19 08:40:37.966439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.190 [2024-11-19 08:40:37.966481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:16.190 [2024-11-19 08:40:37.966515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:16.190 [2024-11-19 08:40:37.966543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.190 [2024-11-19 08:40:37.966592] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:16.190 [2024-11-19 08:40:37.966629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.190 [2024-11-19 08:40:37.966656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:16.190 [2024-11-19 08:40:37.966684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:16.190 [2024-11-19 08:40:37.966731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.190 [2024-11-19 08:40:37.970541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.190 [2024-11-19 08:40:37.970616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:16.190 [2024-11-19 08:40:37.970664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.767 ms 00:19:16.190 [2024-11-19 08:40:37.970686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.190 [2024-11-19 08:40:37.970807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.190 [2024-11-19 08:40:37.970857] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:16.190 [2024-11-19 08:40:37.970888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:16.190 [2024-11-19 08:40:37.970909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.190 [2024-11-19 08:40:37.971911] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:16.190 [2024-11-19 08:40:37.972905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.395 ms, result 0 00:19:16.190 [2024-11-19 08:40:37.974006] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:16.190 Some configs were skipped because the RPC state that can call them passed over. 00:19:16.190 08:40:38 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:16.450 [2024-11-19 08:40:38.200107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.450 [2024-11-19 08:40:38.200213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:16.450 [2024-11-19 08:40:38.200253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:19:16.450 [2024-11-19 08:40:38.200279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.450 [2024-11-19 08:40:38.200338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.703 ms, result 0 00:19:16.450 true 00:19:16.450 08:40:38 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:16.711 [2024-11-19 08:40:38.399452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.711 [2024-11-19 08:40:38.399556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:16.711 [2024-11-19 08:40:38.399591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:19:16.711 [2024-11-19 08:40:38.399617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.711 [2024-11-19 08:40:38.399671] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.276 ms, result 0 00:19:16.711 true 00:19:16.711 08:40:38 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 86558 00:19:16.711 08:40:38 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 86558 ']' 00:19:16.711 08:40:38 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 86558 00:19:16.711 08:40:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:16.711 08:40:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.711 08:40:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86558 00:19:16.711 killing process with pid 86558 00:19:16.711 08:40:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.711 08:40:38 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.711 08:40:38 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86558' 00:19:16.711 08:40:38 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 86558 00:19:16.711 08:40:38 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 86558 00:19:16.711 [2024-11-19 08:40:38.601126] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.711 [2024-11-19 08:40:38.601269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:16.711 [2024-11-19 08:40:38.601302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:16.711 [2024-11-19 08:40:38.601322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.711 [2024-11-19 08:40:38.601363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:16.711 [2024-11-19 08:40:38.602040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.711 [2024-11-19 08:40:38.602078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:16.711 [2024-11-19 08:40:38.602101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:19:16.711 [2024-11-19 08:40:38.602123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.711 [2024-11-19 08:40:38.602370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.711 [2024-11-19 08:40:38.602409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:16.711 [2024-11-19 08:40:38.602437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:19:16.711 [2024-11-19 08:40:38.602457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.711 [2024-11-19 08:40:38.605701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.711 [2024-11-19 08:40:38.605789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:16.711 [2024-11-19 08:40:38.605825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.197 ms 00:19:16.711 [2024-11-19 08:40:38.605847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.711 [2024-11-19 08:40:38.611071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.711 [2024-11-19 08:40:38.611141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:16.711 [2024-11-19 08:40:38.611167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.171 ms 00:19:16.711 [2024-11-19 08:40:38.611188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.711 [2024-11-19 08:40:38.612647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.711 [2024-11-19 08:40:38.612732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:16.711 [2024-11-19 08:40:38.612765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.380 ms 00:19:16.711 [2024-11-19 08:40:38.612786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.973 [2024-11-19 08:40:38.616804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.973 [2024-11-19 08:40:38.616890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:16.973 [2024-11-19 08:40:38.616933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.977 ms 00:19:16.973 [2024-11-19 08:40:38.616957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.973 [2024-11-19 08:40:38.617078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.973 [2024-11-19 08:40:38.617118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:16.973 [2024-11-19 08:40:38.617147] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:16.973 [2024-11-19 08:40:38.617176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.973 [2024-11-19 08:40:38.619314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.974 [2024-11-19 08:40:38.619347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:16.974 [2024-11-19 08:40:38.619356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.103 ms 00:19:16.974 [2024-11-19 08:40:38.619381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.974 [2024-11-19 08:40:38.620853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.974 [2024-11-19 08:40:38.620886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:16.974 [2024-11-19 08:40:38.620895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:19:16.974 [2024-11-19 08:40:38.620905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.974 [2024-11-19 08:40:38.622102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.974 [2024-11-19 08:40:38.622137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:16.974 [2024-11-19 08:40:38.622146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:19:16.974 [2024-11-19 08:40:38.622154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.974 [2024-11-19 08:40:38.623294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.974 [2024-11-19 08:40:38.623368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:16.974 [2024-11-19 08:40:38.623380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:19:16.974 [2024-11-19 08:40:38.623388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.974 [2024-11-19 08:40:38.623429] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:16.974 [2024-11-19 08:40:38.623457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623542] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 
08:40:38.623798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.623992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:19:16.974 [2024-11-19 08:40:38.624000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.624008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.624016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:16.974 [2024-11-19 08:40:38.624023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:16.975 [2024-11-19 08:40:38.624334] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:16.975 [2024-11-19 08:40:38.624340] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d4ca7f05-e3be-4ef5-9cfc-3c0158818b5e 00:19:16.975 [2024-11-19 08:40:38.624349] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:16.975 [2024-11-19 08:40:38.624356] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:16.975 [2024-11-19 08:40:38.624367] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:16.975 [2024-11-19 08:40:38.624376] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:16.975 [2024-11-19 08:40:38.624384] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:16.975 [2024-11-19 08:40:38.624391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:16.975 [2024-11-19 08:40:38.624402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:16.975 [2024-11-19 08:40:38.624408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:16.975 [2024-11-19 08:40:38.624416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:16.975 [2024-11-19 08:40:38.624422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.975 
[2024-11-19 08:40:38.624432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:16.975 [2024-11-19 08:40:38.624440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:19:16.975 [2024-11-19 08:40:38.624451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.975 [2024-11-19 08:40:38.626136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.975 [2024-11-19 08:40:38.626162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:16.975 [2024-11-19 08:40:38.626176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.668 ms 00:19:16.975 [2024-11-19 08:40:38.626185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.975 [2024-11-19 08:40:38.626293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.975 [2024-11-19 08:40:38.626303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:16.975 [2024-11-19 08:40:38.626310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:16.975 [2024-11-19 08:40:38.626319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.975 [2024-11-19 08:40:38.632568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.975 [2024-11-19 08:40:38.632626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:16.975 [2024-11-19 08:40:38.632663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.975 [2024-11-19 08:40:38.632684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.975 [2024-11-19 08:40:38.632778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.975 [2024-11-19 08:40:38.632805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:16.975 [2024-11-19 08:40:38.632825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.975 [2024-11-19 08:40:38.632862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.975 [2024-11-19 08:40:38.632961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.975 [2024-11-19 08:40:38.633002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:16.975 [2024-11-19 08:40:38.633031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.975 [2024-11-19 08:40:38.633058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.975 [2024-11-19 08:40:38.633089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.975 [2024-11-19 08:40:38.633136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:16.975 [2024-11-19 08:40:38.633167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.975 [2024-11-19 08:40:38.633197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.975 [2024-11-19 08:40:38.646647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.975 [2024-11-19 08:40:38.646776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:16.975 [2024-11-19 08:40:38.646821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.975 [2024-11-19 08:40:38.646843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.975 [2024-11-19 08:40:38.655020] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.975 [2024-11-19 08:40:38.655130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:16.976 [2024-11-19 08:40:38.655158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.976 [2024-11-19 08:40:38.655182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.976 [2024-11-19 08:40:38.655238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.976 [2024-11-19 08:40:38.655299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:16.976 [2024-11-19 08:40:38.655310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.976 [2024-11-19 08:40:38.655319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.976 [2024-11-19 08:40:38.655349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.976 [2024-11-19 08:40:38.655359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:16.976 [2024-11-19 08:40:38.655366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.976 [2024-11-19 08:40:38.655373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.976 [2024-11-19 08:40:38.655444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.976 [2024-11-19 08:40:38.655457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:16.976 [2024-11-19 08:40:38.655466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.976 [2024-11-19 08:40:38.655474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.976 [2024-11-19 08:40:38.655514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.976 [2024-11-19 08:40:38.655525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:16.976 [2024-11-19 08:40:38.655533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.976 [2024-11-19 08:40:38.655543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.976 [2024-11-19 08:40:38.655581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.976 [2024-11-19 08:40:38.655591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:16.976 [2024-11-19 08:40:38.655598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.976 [2024-11-19 08:40:38.655608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.976 [2024-11-19 08:40:38.655648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.976 [2024-11-19 08:40:38.655659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:16.976 [2024-11-19 08:40:38.655666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.976 [2024-11-19 08:40:38.655674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.976 [2024-11-19 08:40:38.655861] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.817 ms, result 0 00:19:17.237 08:40:38 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:17.237 08:40:38 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:17.237 [2024-11-19 08:40:38.978349] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:19:17.237 [2024-11-19 08:40:38.978466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86595 ] 00:19:17.237 [2024-11-19 08:40:39.135618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.501 [2024-11-19 08:40:39.164783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.501 [2024-11-19 08:40:39.267528] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:17.501 [2024-11-19 08:40:39.267607] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:17.764 [2024-11-19 08:40:39.421127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.764 [2024-11-19 08:40:39.421174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:17.764 [2024-11-19 08:40:39.421186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:17.764 [2024-11-19 08:40:39.421202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.764 [2024-11-19 08:40:39.423221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.764 [2024-11-19 08:40:39.423260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.764 [2024-11-19 08:40:39.423270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.999 ms 00:19:17.764 [2024-11-19 08:40:39.423299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.764 [2024-11-19 08:40:39.423386] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:17.764 [2024-11-19 08:40:39.423588] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:17.764 [2024-11-19 08:40:39.423601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.765 [2024-11-19 08:40:39.423611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.765 [2024-11-19 08:40:39.423619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:19:17.765 [2024-11-19 08:40:39.423626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.765 [2024-11-19 08:40:39.425068] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:17.765 [2024-11-19 08:40:39.427603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.765 [2024-11-19 08:40:39.427639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:17.765 [2024-11-19 08:40:39.427651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.541 ms 00:19:17.765 [2024-11-19 08:40:39.427659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.765 [2024-11-19 08:40:39.427729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.765 [2024-11-19 08:40:39.427740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:17.765 [2024-11-19 08:40:39.427749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.033 ms 00:19:17.765 [2024-11-19 08:40:39.427766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.765 [2024-11-19 08:40:39.434394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.765 [2024-11-19 08:40:39.434424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.765 [2024-11-19 08:40:39.434432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.601 ms 00:19:17.765 [2024-11-19 08:40:39.434439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.765 [2024-11-19 08:40:39.434557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.765 [2024-11-19 08:40:39.434569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.765 [2024-11-19 08:40:39.434585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:17.765 [2024-11-19 08:40:39.434592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.765 [2024-11-19 08:40:39.434623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.765 [2024-11-19 08:40:39.434631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:17.765 [2024-11-19 08:40:39.434638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:17.765 [2024-11-19 08:40:39.434644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.765 [2024-11-19 08:40:39.434665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:17.765 [2024-11-19 08:40:39.436279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.765 [2024-11-19 08:40:39.436307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.765 [2024-11-19 08:40:39.436322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.624 ms 00:19:17.765 [2024-11-19 08:40:39.436329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.765 [2024-11-19 08:40:39.436372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.765 [2024-11-19 08:40:39.436384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:17.765 [2024-11-19 08:40:39.436391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:17.765 [2024-11-19 08:40:39.436404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.765 [2024-11-19 08:40:39.436422] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:17.765 [2024-11-19 08:40:39.436440] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:17.765 [2024-11-19 08:40:39.436476] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:17.765 [2024-11-19 08:40:39.436496] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:17.765 [2024-11-19 08:40:39.436575] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:17.765 [2024-11-19 08:40:39.436585] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:17.765 [2024-11-19 08:40:39.436593] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:17.765 [2024-11-19 08:40:39.436603] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:17.765 [2024-11-19 08:40:39.436610] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:17.765 [2024-11-19 08:40:39.436618] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:17.765 [2024-11-19 08:40:39.436625] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:17.765 [2024-11-19 08:40:39.436631] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:17.765 [2024-11-19 08:40:39.436647] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:17.765 [2024-11-19 08:40:39.436656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.765 [2024-11-19 08:40:39.436671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:17.765 [2024-11-19 08:40:39.436678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:19:17.765 [2024-11-19 08:40:39.436684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.765 [2024-11-19 08:40:39.436782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.765 [2024-11-19 08:40:39.436799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:17.765 [2024-11-19 08:40:39.436814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:17.765 [2024-11-19 08:40:39.436830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.765 [2024-11-19 08:40:39.436913] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:17.765 [2024-11-19 08:40:39.436923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:17.765 [2024-11-19 08:40:39.436934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.765 [2024-11-19 08:40:39.436942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.765 [2024-11-19 08:40:39.436949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:17.765 [2024-11-19 08:40:39.436955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:17.765 [2024-11-19 08:40:39.436962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:17.765 [2024-11-19 08:40:39.436971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:17.765 [2024-11-19 08:40:39.436978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:17.765 [2024-11-19 08:40:39.436984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.765 [2024-11-19 08:40:39.436991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:17.765 [2024-11-19 08:40:39.436998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:17.765 [2024-11-19 08:40:39.437004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.765 [2024-11-19 08:40:39.437010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:17.765 [2024-11-19 08:40:39.437017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:17.765 [2024-11-19 08:40:39.437023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.765 [2024-11-19 08:40:39.437030] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:17.765 [2024-11-19 08:40:39.437035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:17.765 [2024-11-19 08:40:39.437043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.765 [2024-11-19 08:40:39.437050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:17.765 [2024-11-19 08:40:39.437055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:17.765 [2024-11-19 08:40:39.437061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.765 [2024-11-19 08:40:39.437067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:17.765 [2024-11-19 08:40:39.437078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:17.765 [2024-11-19 08:40:39.437085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.765 [2024-11-19 08:40:39.437091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:17.765 [2024-11-19 08:40:39.437097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:17.765 [2024-11-19 08:40:39.437103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.765 [2024-11-19 08:40:39.437110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:17.765 [2024-11-19 08:40:39.437116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:17.765 [2024-11-19 08:40:39.437123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.765 [2024-11-19 08:40:39.437129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:17.765 [2024-11-19 08:40:39.437135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:17.765 [2024-11-19 08:40:39.437141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.765 [2024-11-19 08:40:39.437148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:17.765 [2024-11-19 08:40:39.437154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:17.765 [2024-11-19 08:40:39.437160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.765 [2024-11-19 08:40:39.437167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:17.765 [2024-11-19 08:40:39.437172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:17.765 [2024-11-19 08:40:39.437179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.765 [2024-11-19 08:40:39.437187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:17.765 [2024-11-19 08:40:39.437193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:17.765 [2024-11-19 08:40:39.437201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.765 [2024-11-19 08:40:39.437207] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:17.765 [2024-11-19 08:40:39.437215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:17.765 [2024-11-19 08:40:39.437222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.765 [2024-11-19 08:40:39.437228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.765 [2024-11-19 08:40:39.437236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:17.765 
[2024-11-19 08:40:39.437242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:17.765 [2024-11-19 08:40:39.437249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:17.765 [2024-11-19 08:40:39.437256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:17.766 [2024-11-19 08:40:39.437262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:17.766 [2024-11-19 08:40:39.437269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:17.766 [2024-11-19 08:40:39.437277] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:17.766 [2024-11-19 08:40:39.437292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.766 [2024-11-19 08:40:39.437303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:17.766 [2024-11-19 08:40:39.437310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:17.766 [2024-11-19 08:40:39.437317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:17.766 [2024-11-19 08:40:39.437323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:17.766 [2024-11-19 08:40:39.437329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:17.766 [2024-11-19 08:40:39.437335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:17.766 [2024-11-19 08:40:39.437342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:17.766 [2024-11-19 08:40:39.437348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:17.766 [2024-11-19 08:40:39.437355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:17.766 [2024-11-19 08:40:39.437370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:17.766 [2024-11-19 08:40:39.437377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:17.766 [2024-11-19 08:40:39.437383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:17.766 [2024-11-19 08:40:39.437389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:17.766 [2024-11-19 08:40:39.437397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:17.766 [2024-11-19 08:40:39.437404] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:17.766 [2024-11-19 08:40:39.437419] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.766 [2024-11-19 08:40:39.437431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:17.766 [2024-11-19 08:40:39.437439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:17.766 [2024-11-19 08:40:39.437446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:17.766 [2024-11-19 08:40:39.437454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:17.766 [2024-11-19 08:40:39.437461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.437468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:17.766 [2024-11-19 08:40:39.437476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:19:17.766 [2024-11-19 08:40:39.437483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.449220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.449258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.766 [2024-11-19 08:40:39.449269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.705 ms 00:19:17.766 [2024-11-19 08:40:39.449277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.449393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.449402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:17.766 [2024-11-19 08:40:39.449415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:17.766 [2024-11-19 08:40:39.449422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.480095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.480340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:17.766 [2024-11-19 08:40:39.480394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.705 ms 00:19:17.766 [2024-11-19 08:40:39.480422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.480671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.480756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.766 [2024-11-19 08:40:39.480819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:17.766 [2024-11-19 08:40:39.480845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.481603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.481666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.766 [2024-11-19 08:40:39.481695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:19:17.766 [2024-11-19 08:40:39.481777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 
08:40:39.482140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.482179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:17.766 [2024-11-19 08:40:39.482216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:19:17.766 [2024-11-19 08:40:39.482264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.493596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.493661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:17.766 [2024-11-19 08:40:39.493704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.285 ms 00:19:17.766 [2024-11-19 08:40:39.493758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.497471] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:17.766 [2024-11-19 08:40:39.497522] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:17.766 [2024-11-19 08:40:39.497540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.497552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:17.766 [2024-11-19 08:40:39.497566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.554 ms 00:19:17.766 [2024-11-19 08:40:39.497576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.514929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.514969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:17.766 [2024-11-19 08:40:39.514981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.321 ms 00:19:17.766 [2024-11-19 08:40:39.514989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.516968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.517004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:17.766 [2024-11-19 08:40:39.517014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.874 ms 00:19:17.766 [2024-11-19 08:40:39.517022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.518651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.518749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:17.766 [2024-11-19 08:40:39.518762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.587 ms 00:19:17.766 [2024-11-19 08:40:39.518769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.519060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.519076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:17.766 [2024-11-19 08:40:39.519100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:19:17.766 [2024-11-19 08:40:39.519107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.539148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.539223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:17.766 [2024-11-19 08:40:39.539237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.037 ms 00:19:17.766 [2024-11-19 08:40:39.539244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.545193] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:17.766 [2024-11-19 08:40:39.560842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.560890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:17.766 [2024-11-19 08:40:39.560902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.544 ms 00:19:17.766 [2024-11-19 08:40:39.560941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.561046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.561056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:17.766 [2024-11-19 08:40:39.561064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:17.766 [2024-11-19 08:40:39.561074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.561127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.561136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:17.766 [2024-11-19 08:40:39.561143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:17.766 [2024-11-19 08:40:39.561150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.561170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.561177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:17.766 [2024-11-19 08:40:39.561184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:17.766 [2024-11-19 08:40:39.561199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.766 [2024-11-19 08:40:39.561234] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:17.766 [2024-11-19 08:40:39.561244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.766 [2024-11-19 08:40:39.561250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:17.767 [2024-11-19 08:40:39.561258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:17.767 [2024-11-19 08:40:39.561265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.767 [2024-11-19 08:40:39.565012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.767 [2024-11-19 08:40:39.565123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:17.767 [2024-11-19 08:40:39.565137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.737 ms 00:19:17.767 [2024-11-19 08:40:39.565145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.767 [2024-11-19 08:40:39.565247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.767 [2024-11-19 08:40:39.565261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:17.767 [2024-11-19 08:40:39.565269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:17.767 [2024-11-19 08:40:39.565276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.767 [2024-11-19 08:40:39.566150] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:17.767 [2024-11-19 08:40:39.567083] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 145.033 ms, result 0 00:19:17.767 [2024-11-19 08:40:39.567867] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:17.767 [2024-11-19 08:40:39.577624] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:18.708  [2024-11-19T08:40:41.998Z] Copying: 30/256 [MB] (30 MBps) [2024-11-19T08:40:42.937Z] Copying: 56/256 [MB] (26 MBps) [2024-11-19T08:40:43.878Z] Copying: 84/256 [MB] (27 MBps) [2024-11-19T08:40:44.819Z] Copying: 111/256 [MB] (27 MBps) [2024-11-19T08:40:45.760Z] Copying: 138/256 [MB] (27 MBps) [2024-11-19T08:40:46.701Z] Copying: 166/256 [MB] (27 MBps) [2024-11-19T08:40:47.643Z] Copying: 193/256 [MB] (27 MBps) [2024-11-19T08:40:48.587Z] Copying: 221/256 [MB] (27 MBps) [2024-11-19T08:40:48.847Z] Copying: 248/256 [MB] (27 MBps) [2024-11-19T08:40:48.847Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-19 08:40:48.821255] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:26.940 [2024-11-19 08:40:48.822614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.940 [2024-11-19 08:40:48.822654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:26.940 [2024-11-19 08:40:48.822665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:26.940 [2024-11-19 08:40:48.822673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.940 [2024-11-19 08:40:48.822691] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:26.940 [2024-11-19 08:40:48.823358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.940 [2024-11-19 08:40:48.823383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:26.940 [2024-11-19 08:40:48.823391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:19:26.940 [2024-11-19 08:40:48.823398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.940 [2024-11-19 08:40:48.823599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.940 [2024-11-19 08:40:48.823608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:26.940 [2024-11-19 08:40:48.823616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:19:26.940 [2024-11-19 08:40:48.823625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.940 [2024-11-19 08:40:48.826202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.940 [2024-11-19 08:40:48.826237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:26.940 [2024-11-19 08:40:48.826246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.569 ms 00:19:26.940 [2024-11-19 08:40:48.826252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:26.941 [2024-11-19 08:40:48.831334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.941 [2024-11-19 08:40:48.831361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:26.941 [2024-11-19 08:40:48.831369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.065 ms 00:19:26.941 [2024-11-19 08:40:48.831375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.941 [2024-11-19 08:40:48.832773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.941 [2024-11-19 08:40:48.832837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:26.941 [2024-11-19 08:40:48.832863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.353 ms 00:19:26.941 [2024-11-19 08:40:48.832883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.941 [2024-11-19 08:40:48.836660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.941 [2024-11-19 08:40:48.836746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:26.941 [2024-11-19 08:40:48.836791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.728 ms 00:19:26.941 [2024-11-19 08:40:48.836811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.941 [2024-11-19 08:40:48.836941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.941 [2024-11-19 08:40:48.836965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:26.941 [2024-11-19 08:40:48.836985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:26.941 [2024-11-19 08:40:48.837057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.941 [2024-11-19 08:40:48.839320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.941 [2024-11-19 08:40:48.839380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:26.941 [2024-11-19 08:40:48.839405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.234 ms 00:19:26.941 [2024-11-19 08:40:48.839424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.941 [2024-11-19 08:40:48.840871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.941 [2024-11-19 08:40:48.840931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:26.941 [2024-11-19 08:40:48.840955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:19:26.941 [2024-11-19 08:40:48.840974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.941 [2024-11-19 08:40:48.842083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.941 [2024-11-19 08:40:48.842144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:26.941 [2024-11-19 08:40:48.842155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:19:26.941 [2024-11-19 08:40:48.842161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.941 [2024-11-19 08:40:48.843370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.941 [2024-11-19 08:40:48.843425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:26.941 [2024-11-19 08:40:48.843451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.161 ms 00:19:26.941 [2024-11-19 08:40:48.843470] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.941 [2024-11-19 08:40:48.843519] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:26.941 [2024-11-19 08:40:48.843547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:26.941 [2024-11-19 08:40:48.843591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:26.941 [2024-11-19 08:40:48.843620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:26.941 [2024-11-19 08:40:48.843669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:26.941 [2024-11-19 08:40:48.843698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:26.941 [2024-11-19 08:40:48.843762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:26.941 [2024-11-19 08:40:48.843857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:26.941 [2024-11-19 08:40:48.843895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:26.941 [2024-11-19 08:40:48.843945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:26.941 [2024-11-19 08:40:48.843979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:26.941 [2024-11-19 08:40:48.844031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:26.941 [2024-11-19 08:40:48.844089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:26.941 [2024-11-19 08:40:48.844129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844440] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 
08:40:48.844596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:27.203 [2024-11-19 08:40:48.844665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:19:27.204 [2024-11-19 08:40:48.844771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:27.204 [2024-11-19 08:40:48.844790] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:27.204 [2024-11-19 08:40:48.844796] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d4ca7f05-e3be-4ef5-9cfc-3c0158818b5e 00:19:27.204 [2024-11-19 08:40:48.844804] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:27.204 [2024-11-19 08:40:48.844818] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:27.204 [2024-11-19 08:40:48.844824] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:27.204 [2024-11-19 08:40:48.844831] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:27.204 [2024-11-19 08:40:48.844838] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:27.204 [2024-11-19 08:40:48.844845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:27.204 [2024-11-19 08:40:48.844852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:27.204 [2024-11-19 08:40:48.844857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:27.204 [2024-11-19 08:40:48.844863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:27.204 [2024-11-19 08:40:48.844869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.204 [2024-11-19 08:40:48.844881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:27.204 [2024-11-19 08:40:48.844894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:19:27.204 [2024-11-19 08:40:48.844901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.846658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.204 [2024-11-19 08:40:48.846732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:27.204 [2024-11-19 08:40:48.846753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.736 ms 00:19:27.204 [2024-11-19 08:40:48.846759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.846865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.204 [2024-11-19 08:40:48.846880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:27.204 [2024-11-19 08:40:48.846887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:27.204 [2024-11-19 08:40:48.846894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.852906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.204 [2024-11-19 08:40:48.852970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:27.204 [2024-11-19 08:40:48.852982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.204 [2024-11-19 08:40:48.852989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.853048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.204 [2024-11-19 08:40:48.853056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:19:27.204 [2024-11-19 08:40:48.853063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.204 [2024-11-19 08:40:48.853070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.853119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.204 [2024-11-19 08:40:48.853131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.204 [2024-11-19 08:40:48.853138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.204 [2024-11-19 08:40:48.853144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.853161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.204 [2024-11-19 08:40:48.853171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.204 [2024-11-19 08:40:48.853178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.204 [2024-11-19 08:40:48.853185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.866352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.204 [2024-11-19 08:40:48.866402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.204 [2024-11-19 08:40:48.866413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.204 [2024-11-19 08:40:48.866420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.874489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.204 [2024-11-19 08:40:48.874530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.204 [2024-11-19 08:40:48.874541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.204 [2024-11-19 08:40:48.874549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.874575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.204 [2024-11-19 08:40:48.874583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.204 [2024-11-19 08:40:48.874590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.204 [2024-11-19 08:40:48.874597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.874621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.204 [2024-11-19 08:40:48.874628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.204 [2024-11-19 08:40:48.874651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.204 [2024-11-19 08:40:48.874658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.874790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.204 [2024-11-19 08:40:48.874802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.204 [2024-11-19 08:40:48.874810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.204 [2024-11-19 08:40:48.874817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.874862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:19:27.204 [2024-11-19 08:40:48.874872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:27.204 [2024-11-19 08:40:48.874879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.204 [2024-11-19 08:40:48.874890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.874934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.204 [2024-11-19 08:40:48.874949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.204 [2024-11-19 08:40:48.874956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.204 [2024-11-19 08:40:48.874969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.875020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.204 [2024-11-19 08:40:48.875032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.204 [2024-11-19 08:40:48.875043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.204 [2024-11-19 08:40:48.875050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.204 [2024-11-19 08:40:48.875185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.639 ms, result 0 00:19:27.204 00:19:27.204 00:19:27.204 08:40:49 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:27.204 08:40:49 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:27.775 08:40:49 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:27.775 [2024-11-19 08:40:49.599029] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:19:27.775 [2024-11-19 08:40:49.599140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86713 ] 00:19:28.035 [2024-11-19 08:40:49.753553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.035 [2024-11-19 08:40:49.780499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.035 [2024-11-19 08:40:49.883135] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:28.035 [2024-11-19 08:40:49.883205] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:28.297 [2024-11-19 08:40:50.036996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.297 [2024-11-19 08:40:50.037044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:28.297 [2024-11-19 08:40:50.037056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:28.297 [2024-11-19 08:40:50.037064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.297 [2024-11-19 08:40:50.039042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.297 [2024-11-19 08:40:50.039082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.297 [2024-11-19 08:40:50.039091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:19:28.297 [2024-11-19 08:40:50.039106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.297 [2024-11-19 08:40:50.039184] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:28.297 [2024-11-19 08:40:50.039394] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:28.297 [2024-11-19 08:40:50.039414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.297 [2024-11-19 08:40:50.039424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.297 [2024-11-19 08:40:50.039432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:19:28.297 [2024-11-19 08:40:50.039439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.297 [2024-11-19 08:40:50.040987] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:28.297 [2024-11-19 08:40:50.043596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.297 [2024-11-19 08:40:50.043633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:28.297 [2024-11-19 08:40:50.043646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms 00:19:28.297 [2024-11-19 08:40:50.043653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.297 [2024-11-19 08:40:50.043709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.297 [2024-11-19 08:40:50.043730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:28.297 [2024-11-19 08:40:50.043738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:28.297 [2024-11-19 08:40:50.043745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.297 [2024-11-19 08:40:50.050437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:28.297 [2024-11-19 08:40:50.050466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.297 [2024-11-19 08:40:50.050475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.671 ms 00:19:28.297 [2024-11-19 08:40:50.050482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.297 [2024-11-19 08:40:50.050598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.297 [2024-11-19 08:40:50.050611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.297 [2024-11-19 08:40:50.050619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:28.297 [2024-11-19 08:40:50.050626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.297 [2024-11-19 08:40:50.050658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.297 [2024-11-19 08:40:50.050666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:28.297 [2024-11-19 08:40:50.050674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:28.297 [2024-11-19 08:40:50.050689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.298 [2024-11-19 08:40:50.050734] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:28.298 [2024-11-19 08:40:50.052334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.298 [2024-11-19 08:40:50.052423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.298 [2024-11-19 08:40:50.052436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:19:28.298 [2024-11-19 08:40:50.052451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.298 [2024-11-19 08:40:50.052498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.298 [2024-11-19 08:40:50.052510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:28.298 [2024-11-19 08:40:50.052518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:28.298 [2024-11-19 08:40:50.052525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.298 [2024-11-19 08:40:50.052542] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:28.298 [2024-11-19 08:40:50.052559] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:28.298 [2024-11-19 08:40:50.052597] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:28.298 [2024-11-19 08:40:50.052616] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:28.298 [2024-11-19 08:40:50.052707] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:28.298 [2024-11-19 08:40:50.052744] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:28.298 [2024-11-19 08:40:50.052754] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:28.298 [2024-11-19 08:40:50.052765] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:28.298 [2024-11-19 08:40:50.052774] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:28.298 [2024-11-19 08:40:50.052782] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:28.298 [2024-11-19 08:40:50.052789] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:28.298 [2024-11-19 08:40:50.052797] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:28.298 [2024-11-19 08:40:50.052804] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:28.298 [2024-11-19 08:40:50.052814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.298 [2024-11-19 08:40:50.052825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:28.298 [2024-11-19 08:40:50.052834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:19:28.298 [2024-11-19 08:40:50.052840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.298 [2024-11-19 08:40:50.052920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.298 [2024-11-19 08:40:50.052930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:28.298 [2024-11-19 08:40:50.052937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:28.298 [2024-11-19 08:40:50.052944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.298 [2024-11-19 08:40:50.053023] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:28.298 [2024-11-19 08:40:50.053034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:28.298 [2024-11-19 08:40:50.053045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:28.298 [2024-11-19 08:40:50.053052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.298 [2024-11-19 08:40:50.053060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:28.298 [2024-11-19 08:40:50.053067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:28.298 [2024-11-19 08:40:50.053075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:28.298 [2024-11-19 08:40:50.053084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:28.298 [2024-11-19 08:40:50.053092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:28.298 [2024-11-19 08:40:50.053099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.298 [2024-11-19 08:40:50.053105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:28.298 [2024-11-19 08:40:50.053111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:28.298 [2024-11-19 08:40:50.053118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.298 [2024-11-19 08:40:50.053124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:28.298 [2024-11-19 08:40:50.053130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:28.298 [2024-11-19 08:40:50.053135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.298 [2024-11-19 08:40:50.053142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:28.298 [2024-11-19 08:40:50.053149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:28.298 [2024-11-19 08:40:50.053154] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.298 [2024-11-19 08:40:50.053160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:28.298 [2024-11-19 08:40:50.053166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:28.298 [2024-11-19 08:40:50.053172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.298 [2024-11-19 08:40:50.053178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:28.298 [2024-11-19 08:40:50.053189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:28.298 [2024-11-19 08:40:50.053195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.298 [2024-11-19 08:40:50.053201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:28.298 [2024-11-19 08:40:50.053207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:28.298 [2024-11-19 08:40:50.053212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.298 [2024-11-19 08:40:50.053218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:28.298 [2024-11-19 08:40:50.053223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:28.298 [2024-11-19 08:40:50.053230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.298 [2024-11-19 08:40:50.053237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:28.298 [2024-11-19 08:40:50.053243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:28.298 [2024-11-19 08:40:50.053248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.298 [2024-11-19 08:40:50.053254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:28.298 [2024-11-19 08:40:50.053260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:28.298 [2024-11-19 08:40:50.053265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.298 [2024-11-19 08:40:50.053271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:28.298 [2024-11-19 08:40:50.053277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:28.298 [2024-11-19 08:40:50.053285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.298 [2024-11-19 08:40:50.053291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:28.298 [2024-11-19 08:40:50.053297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:28.298 [2024-11-19 08:40:50.053303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.298 [2024-11-19 08:40:50.053312] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:28.298 [2024-11-19 08:40:50.053327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:28.298 [2024-11-19 08:40:50.053334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:28.298 [2024-11-19 08:40:50.053340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.298 [2024-11-19 08:40:50.053347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:28.298 [2024-11-19 08:40:50.053353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:28.298 [2024-11-19 08:40:50.053359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:28.298 
[2024-11-19 08:40:50.053365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:28.298 [2024-11-19 08:40:50.053371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:28.298 [2024-11-19 08:40:50.053377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:28.298 [2024-11-19 08:40:50.053384] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:28.298 [2024-11-19 08:40:50.053392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.298 [2024-11-19 08:40:50.053404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:28.298 [2024-11-19 08:40:50.053411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:28.298 [2024-11-19 08:40:50.053417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:28.298 [2024-11-19 08:40:50.053423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:28.298 [2024-11-19 08:40:50.053429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:28.299 [2024-11-19 08:40:50.053436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:28.299 [2024-11-19 08:40:50.053442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:28.299 [2024-11-19 08:40:50.053448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:28.299 [2024-11-19 08:40:50.053455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:28.299 [2024-11-19 08:40:50.053469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:28.299 [2024-11-19 08:40:50.053475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:28.299 [2024-11-19 08:40:50.053481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:28.299 [2024-11-19 08:40:50.053487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:28.299 [2024-11-19 08:40:50.053493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:28.299 [2024-11-19 08:40:50.053500] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:28.299 [2024-11-19 08:40:50.053508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.299 [2024-11-19 08:40:50.053520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:28.299 [2024-11-19 08:40:50.053527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:28.299 [2024-11-19 08:40:50.053534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:28.299 [2024-11-19 08:40:50.053541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:28.299 [2024-11-19 08:40:50.053551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.053558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:28.299 [2024-11-19 08:40:50.053565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:19:28.299 [2024-11-19 08:40:50.053572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.065523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.065599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.299 [2024-11-19 08:40:50.065628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.924 ms 00:19:28.299 [2024-11-19 08:40:50.065663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.065799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.065828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:28.299 [2024-11-19 08:40:50.065864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:28.299 [2024-11-19 08:40:50.065892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.091643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.091903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.299 [2024-11-19 08:40:50.091991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.728 ms 00:19:28.299 [2024-11-19 08:40:50.092067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.092311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.092409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.299 [2024-11-19 08:40:50.092495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:28.299 [2024-11-19 08:40:50.092566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.093326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.093438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.299 [2024-11-19 08:40:50.093545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:19:28.299 [2024-11-19 08:40:50.093614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.093994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.094119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.299 [2024-11-19 08:40:50.094204] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:19:28.299 [2024-11-19 08:40:50.094279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.104594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.104706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.299 [2024-11-19 08:40:50.104799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.226 ms 00:19:28.299 [2024-11-19 08:40:50.104852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.108153] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:28.299 [2024-11-19 08:40:50.108274] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:28.299 [2024-11-19 08:40:50.108348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.108409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:28.299 [2024-11-19 08:40:50.108459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.283 ms 00:19:28.299 [2024-11-19 08:40:50.108513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.124239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.124309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:28.299 [2024-11-19 08:40:50.124336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.652 ms 00:19:28.299 [2024-11-19 08:40:50.124354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.126106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.126172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:28.299 [2024-11-19 08:40:50.126198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:19:28.299 [2024-11-19 08:40:50.126217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.127735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.127822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:28.299 [2024-11-19 08:40:50.127850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms 00:19:28.299 [2024-11-19 08:40:50.127870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.128144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.128207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:28.299 [2024-11-19 08:40:50.128236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:19:28.299 [2024-11-19 08:40:50.128277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.147710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.147865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:28.299 [2024-11-19 08:40:50.147894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
19.412 ms 00:19:28.299 [2024-11-19 08:40:50.147914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.153606] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:28.299 [2024-11-19 08:40:50.169425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.169538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:28.299 [2024-11-19 08:40:50.169567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.445 ms 00:19:28.299 [2024-11-19 08:40:50.169600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.169710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.169772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:28.299 [2024-11-19 08:40:50.169802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:28.299 [2024-11-19 08:40:50.169833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.299 [2024-11-19 08:40:50.169904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.299 [2024-11-19 08:40:50.169937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:28.299 [2024-11-19 08:40:50.169965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:28.300 [2024-11-19 08:40:50.169988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.300 [2024-11-19 08:40:50.170038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.300 [2024-11-19 08:40:50.170068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:28.300 [2024-11-19 08:40:50.170098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:28.300 [2024-11-19 08:40:50.170126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.300 [2024-11-19 08:40:50.170204] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:28.300 [2024-11-19 08:40:50.170238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.300 [2024-11-19 08:40:50.170257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:28.300 [2024-11-19 08:40:50.170289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:28.300 [2024-11-19 08:40:50.170307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.300 [2024-11-19 08:40:50.174191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.300 [2024-11-19 08:40:50.174263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:28.300 [2024-11-19 08:40:50.174308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.849 ms 00:19:28.300 [2024-11-19 08:40:50.174328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.300 [2024-11-19 08:40:50.174425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.300 [2024-11-19 08:40:50.174464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:28.300 [2024-11-19 08:40:50.174491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:28.300 [2024-11-19 08:40:50.174520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.300 
[2024-11-19 08:40:50.175441] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:28.300 [2024-11-19 08:40:50.176417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.432 ms, result 0 00:19:28.300 [2024-11-19 08:40:50.177223] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.300 [2024-11-19 08:40:50.186317] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:28.562  [2024-11-19T08:40:50.469Z] Copying: 4096/4096 [kB] (average 23 MBps)[2024-11-19 08:40:50.358037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.562 [2024-11-19 08:40:50.358821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.562 [2024-11-19 08:40:50.358882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.562 [2024-11-19 08:40:50.358908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:28.562 [2024-11-19 08:40:50.358927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.562 [2024-11-19 08:40:50.358957] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:28.562 [2024-11-19 08:40:50.359635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.562 [2024-11-19 08:40:50.359676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.562 [2024-11-19 08:40:50.359704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:19:28.562 [2024-11-19 08:40:50.359744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.562 [2024-11-19 08:40:50.361560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.562 [2024-11-19 08:40:50.361645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.562 [2024-11-19 08:40:50.361682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.767 ms 00:19:28.562 [2024-11-19 08:40:50.361725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.562 [2024-11-19 08:40:50.364915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.562 [2024-11-19 08:40:50.364981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.562 [2024-11-19 08:40:50.365011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.158 ms 00:19:28.562 [2024-11-19 08:40:50.365032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.562 [2024-11-19 08:40:50.370189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.562 [2024-11-19 08:40:50.370250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.562 [2024-11-19 08:40:50.370262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.099 ms 00:19:28.562 [2024-11-19 08:40:50.370269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.562 [2024-11-19 08:40:50.371710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.562 [2024-11-19 08:40:50.371753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.562 [2024-11-19 08:40:50.371762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 1.376 ms 00:19:28.562 [2024-11-19 08:40:50.371769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.562 [2024-11-19 08:40:50.376125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.562 [2024-11-19 08:40:50.376211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.562 [2024-11-19 08:40:50.376224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.336 ms 00:19:28.562 [2024-11-19 08:40:50.376232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.562 [2024-11-19 08:40:50.376331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.562 [2024-11-19 08:40:50.376342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.562 [2024-11-19 08:40:50.376350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:28.562 [2024-11-19 08:40:50.376357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.562 [2024-11-19 08:40:50.378839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.562 [2024-11-19 08:40:50.378871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:28.562 [2024-11-19 08:40:50.378879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.468 ms 00:19:28.562 [2024-11-19 08:40:50.378886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.562 [2024-11-19 08:40:50.380425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.563 [2024-11-19 08:40:50.380460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:28.563 [2024-11-19 08:40:50.380469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.512 ms 00:19:28.563 [2024-11-19 08:40:50.380476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.563 [2024-11-19 08:40:50.381646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.563 [2024-11-19 08:40:50.381682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.563 [2024-11-19 08:40:50.381691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.137 ms 00:19:28.563 [2024-11-19 08:40:50.381697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.563 [2024-11-19 08:40:50.382827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.563 [2024-11-19 08:40:50.382858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.563 [2024-11-19 08:40:50.382868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:19:28.563 [2024-11-19 08:40:50.382875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.563 [2024-11-19 08:40:50.382899] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.563 [2024-11-19 08:40:50.382914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.382924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.382932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.382939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 
08:40:50.382947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.382955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.382961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.382970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.382977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.382985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.382992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.382999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:28.563 [2024-11-19 08:40:50.383130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.563 [2024-11-19 08:40:50.383476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.564 [2024-11-19 08:40:50.383631] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.564 [2024-11-19 08:40:50.383639] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d4ca7f05-e3be-4ef5-9cfc-3c0158818b5e 00:19:28.564 [2024-11-19 08:40:50.383647] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.564 [2024-11-19 08:40:50.383661] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.564 
[2024-11-19 08:40:50.383667] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.564 [2024-11-19 08:40:50.383674] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.564 [2024-11-19 08:40:50.383681] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.564 [2024-11-19 08:40:50.383688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.564 [2024-11-19 08:40:50.383694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.564 [2024-11-19 08:40:50.383702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.564 [2024-11-19 08:40:50.383710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.564 [2024-11-19 08:40:50.383717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.564 [2024-11-19 08:40:50.383758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.564 [2024-11-19 08:40:50.383768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:19:28.564 [2024-11-19 08:40:50.383774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.385444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.564 [2024-11-19 08:40:50.385465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.564 [2024-11-19 08:40:50.385473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.655 ms 00:19:28.564 [2024-11-19 08:40:50.385480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.385586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.564 [2024-11-19 08:40:50.385594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.564 [2024-11-19 08:40:50.385601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:28.564 [2024-11-19 08:40:50.385607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.391736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.564 [2024-11-19 08:40:50.391777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.564 [2024-11-19 08:40:50.391786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.564 [2024-11-19 08:40:50.391794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.391861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.564 [2024-11-19 08:40:50.391872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.564 [2024-11-19 08:40:50.391880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.564 [2024-11-19 08:40:50.391888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.391926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.564 [2024-11-19 08:40:50.391938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.564 [2024-11-19 08:40:50.391945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.564 [2024-11-19 08:40:50.391952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.391971] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:28.564 [2024-11-19 08:40:50.391984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.564 [2024-11-19 08:40:50.391990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.564 [2024-11-19 08:40:50.391997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.405269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.564 [2024-11-19 08:40:50.405316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.564 [2024-11-19 08:40:50.405326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.564 [2024-11-19 08:40:50.405335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.413360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.564 [2024-11-19 08:40:50.413478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.564 [2024-11-19 08:40:50.413493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.564 [2024-11-19 08:40:50.413501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.413526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.564 [2024-11-19 08:40:50.413534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.564 [2024-11-19 08:40:50.413541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.564 [2024-11-19 08:40:50.413560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.413586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.564 [2024-11-19 08:40:50.413593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.564 [2024-11-19 08:40:50.413604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.564 [2024-11-19 08:40:50.413611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.413679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.564 [2024-11-19 08:40:50.413689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.564 [2024-11-19 08:40:50.413696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.564 [2024-11-19 08:40:50.413703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.413872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.564 [2024-11-19 08:40:50.413910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:28.564 [2024-11-19 08:40:50.413946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.564 [2024-11-19 08:40:50.413972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.414026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.564 [2024-11-19 08:40:50.414058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.564 [2024-11-19 08:40:50.414087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.564 [2024-11-19 08:40:50.414106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:28.564 [2024-11-19 08:40:50.414172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.564 [2024-11-19 08:40:50.414206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.564 [2024-11-19 08:40:50.414243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.564 [2024-11-19 08:40:50.414268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-11-19 08:40:50.414413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.658 ms, result 0 00:19:28.829 00:19:28.829 00:19:28.829 08:40:50 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=86727 00:19:28.829 08:40:50 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:28.829 08:40:50 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 86727 00:19:28.829 08:40:50 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 86727 ']' 00:19:28.829 08:40:50 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.829 08:40:50 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.829 08:40:50 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.829 08:40:50 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.829 08:40:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:28.829 [2024-11-19 08:40:50.728445] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:19:28.829 [2024-11-19 08:40:50.728609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86727 ] 00:19:29.101 [2024-11-19 08:40:50.887411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.101 [2024-11-19 08:40:50.911996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.707 08:40:51 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.707 08:40:51 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:29.707 08:40:51 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:29.967 [2024-11-19 08:40:51.713318] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:29.967 [2024-11-19 08:40:51.713395] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.230 [2024-11-19 08:40:51.883513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.230 [2024-11-19 08:40:51.883561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:30.230 [2024-11-19 08:40:51.883573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:30.230 [2024-11-19 08:40:51.883591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.230 [2024-11-19 08:40:51.885576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.230 [2024-11-19 08:40:51.885615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.230 [2024-11-19 08:40:51.885642] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.971 ms 00:19:30.230 [2024-11-19 08:40:51.885650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.230 [2024-11-19 08:40:51.885727] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:30.230 [2024-11-19 08:40:51.885923] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:30.230 [2024-11-19 08:40:51.885947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.230 [2024-11-19 08:40:51.885971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.230 [2024-11-19 08:40:51.885981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:19:30.230 [2024-11-19 08:40:51.885989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.230 [2024-11-19 08:40:51.887423] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:30.230 [2024-11-19 08:40:51.889907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.230 [2024-11-19 08:40:51.889941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:30.230 [2024-11-19 08:40:51.889968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.488 ms 00:19:30.230 [2024-11-19 08:40:51.889976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.230 [2024-11-19 08:40:51.890036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.230 [2024-11-19 08:40:51.890047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:30.230 [2024-11-19 08:40:51.890068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:30.230 [2024-11-19 08:40:51.890076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.230 [2024-11-19 08:40:51.896778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.230 [2024-11-19 08:40:51.896803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.230 [2024-11-19 08:40:51.896830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.667 ms 00:19:30.230 [2024-11-19 08:40:51.896837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.230 [2024-11-19 08:40:51.896958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.230 [2024-11-19 08:40:51.896971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.230 [2024-11-19 08:40:51.896982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:30.230 [2024-11-19 08:40:51.896989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.230 [2024-11-19 08:40:51.897021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.230 [2024-11-19 08:40:51.897029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:30.230 [2024-11-19 08:40:51.897040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:30.230 [2024-11-19 08:40:51.897047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.230 [2024-11-19 08:40:51.897079] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:30.230 [2024-11-19 08:40:51.898676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:30.230 [2024-11-19 08:40:51.898707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.230 [2024-11-19 08:40:51.898755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.608 ms 00:19:30.230 [2024-11-19 08:40:51.898768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.230 [2024-11-19 08:40:51.898809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.230 [2024-11-19 08:40:51.898821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:30.230 [2024-11-19 08:40:51.898829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:30.230 [2024-11-19 08:40:51.898837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.230 [2024-11-19 08:40:51.898857] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:30.230 [2024-11-19 08:40:51.898884] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:30.230 [2024-11-19 08:40:51.898916] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:30.230 [2024-11-19 08:40:51.898940] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:30.230 [2024-11-19 08:40:51.899023] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:30.230 [2024-11-19 08:40:51.899050] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:30.230 [2024-11-19 08:40:51.899060] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:30.230 [2024-11-19 08:40:51.899072] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:30.230 [2024-11-19 08:40:51.899082] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:30.230 [2024-11-19 08:40:51.899100] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:30.230 [2024-11-19 08:40:51.899108] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:30.230 [2024-11-19 08:40:51.899117] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:30.230 [2024-11-19 08:40:51.899124] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:30.230 [2024-11-19 08:40:51.899136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.230 [2024-11-19 08:40:51.899143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:30.230 [2024-11-19 08:40:51.899152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:19:30.230 [2024-11-19 08:40:51.899159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.230 [2024-11-19 08:40:51.899231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.230 [2024-11-19 08:40:51.899240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:30.230 [2024-11-19 08:40:51.899249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:30.230 [2024-11-19 08:40:51.899256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.230 [2024-11-19 08:40:51.899341] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:30.230 [2024-11-19 08:40:51.899355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:30.230 [2024-11-19 08:40:51.899372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.230 [2024-11-19 08:40:51.899381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.230 [2024-11-19 08:40:51.899394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:30.230 [2024-11-19 08:40:51.899400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:30.231 [2024-11-19 08:40:51.899409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:30.231 [2024-11-19 08:40:51.899416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:30.231 [2024-11-19 08:40:51.899424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:30.231 [2024-11-19 08:40:51.899431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.231 [2024-11-19 08:40:51.899439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:30.231 [2024-11-19 08:40:51.899446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:30.231 [2024-11-19 08:40:51.899454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.231 [2024-11-19 08:40:51.899461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:30.231 [2024-11-19 08:40:51.899469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:30.231 [2024-11-19 08:40:51.899475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.231 [2024-11-19 08:40:51.899482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:30.231 [2024-11-19 08:40:51.899489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:30.231 [2024-11-19 08:40:51.899497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.231 [2024-11-19 08:40:51.899504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:30.231 [2024-11-19 08:40:51.899514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:30.231 [2024-11-19 08:40:51.899520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.231 [2024-11-19 08:40:51.899527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:30.231 [2024-11-19 08:40:51.899534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:30.231 [2024-11-19 08:40:51.899541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.231 [2024-11-19 08:40:51.899548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:30.231 [2024-11-19 08:40:51.899556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:30.231 [2024-11-19 08:40:51.899562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.231 [2024-11-19 08:40:51.899570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:30.231 [2024-11-19 08:40:51.899576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:30.231 [2024-11-19 08:40:51.899584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.231 [2024-11-19 08:40:51.899590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:30.231 [2024-11-19 
08:40:51.899599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:30.231 [2024-11-19 08:40:51.899605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.231 [2024-11-19 08:40:51.899613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:30.231 [2024-11-19 08:40:51.899619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:30.231 [2024-11-19 08:40:51.899628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.231 [2024-11-19 08:40:51.899635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:30.231 [2024-11-19 08:40:51.899642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:30.231 [2024-11-19 08:40:51.899649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.231 [2024-11-19 08:40:51.899657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:30.231 [2024-11-19 08:40:51.899664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:30.231 [2024-11-19 08:40:51.899672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.231 [2024-11-19 08:40:51.899678] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:30.231 [2024-11-19 08:40:51.899694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:30.231 [2024-11-19 08:40:51.899700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.231 [2024-11-19 08:40:51.899708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.231 [2024-11-19 08:40:51.899715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:30.231 [2024-11-19 08:40:51.899735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:30.231 [2024-11-19 08:40:51.899741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:30.231 [2024-11-19 08:40:51.899749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:30.231 [2024-11-19 08:40:51.899755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:30.231 [2024-11-19 08:40:51.899765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:30.231 [2024-11-19 08:40:51.899773] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:30.231 [2024-11-19 08:40:51.899783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.231 [2024-11-19 08:40:51.899792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:30.231 [2024-11-19 08:40:51.899801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:30.231 [2024-11-19 08:40:51.899809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:30.231 [2024-11-19 08:40:51.899817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:30.231 [2024-11-19 08:40:51.899824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:30.231 
[2024-11-19 08:40:51.899833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:30.231 [2024-11-19 08:40:51.899840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:30.231 [2024-11-19 08:40:51.899848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:30.231 [2024-11-19 08:40:51.899855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:30.231 [2024-11-19 08:40:51.899863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:30.231 [2024-11-19 08:40:51.899870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:30.231 [2024-11-19 08:40:51.899879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:30.231 [2024-11-19 08:40:51.899885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:30.231 [2024-11-19 08:40:51.899904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:30.231 [2024-11-19 08:40:51.899911] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:30.231 [2024-11-19 08:40:51.899923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.231 [2024-11-19 08:40:51.899931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:30.231 [2024-11-19 08:40:51.899939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:30.231 [2024-11-19 08:40:51.899946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:30.231 [2024-11-19 08:40:51.899955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:30.231 [2024-11-19 08:40:51.899965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.231 [2024-11-19 08:40:51.899980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:30.231 [2024-11-19 08:40:51.899987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:19:30.231 [2024-11-19 08:40:51.899997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.231 [2024-11-19 08:40:51.911848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.231 [2024-11-19 08:40:51.911884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.231 [2024-11-19 08:40:51.911895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.818 ms 00:19:30.231 [2024-11-19 08:40:51.911904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.231 [2024-11-19 08:40:51.912015] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.231 [2024-11-19 08:40:51.912045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:30.231 [2024-11-19 08:40:51.912060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:30.231 [2024-11-19 08:40:51.912069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.231 [2024-11-19 08:40:51.922628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.231 [2024-11-19 08:40:51.922671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.231 [2024-11-19 08:40:51.922683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.558 ms 00:19:30.231 [2024-11-19 08:40:51.922692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.231 [2024-11-19 08:40:51.922764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.231 [2024-11-19 08:40:51.922786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.231 [2024-11-19 08:40:51.922802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:30.231 [2024-11-19 08:40:51.922817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.231 [2024-11-19 08:40:51.923240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.231 [2024-11-19 08:40:51.923268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.232 [2024-11-19 08:40:51.923276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:19:30.232 [2024-11-19 08:40:51.923285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:51.923395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:51.923419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.232 [2024-11-19 08:40:51.923427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:30.232 [2024-11-19 08:40:51.923435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:51.930380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:51.930417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.232 [2024-11-19 08:40:51.930427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.938 ms 00:19:30.232 [2024-11-19 08:40:51.930436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:51.933023] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:30.232 [2024-11-19 08:40:51.933062] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:30.232 [2024-11-19 08:40:51.933081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:51.933090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:30.232 [2024-11-19 08:40:51.933099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.552 ms 00:19:30.232 [2024-11-19 08:40:51.933108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:51.945327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 
08:40:51.945387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:30.232 [2024-11-19 08:40:51.945399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.185 ms 00:19:30.232 [2024-11-19 08:40:51.945410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:51.947195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:51.947231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:30.232 [2024-11-19 08:40:51.947240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.715 ms 00:19:30.232 [2024-11-19 08:40:51.947250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:51.948769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:51.948803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:30.232 [2024-11-19 08:40:51.948812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.483 ms 00:19:30.232 [2024-11-19 08:40:51.948821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:51.949109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:51.949134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:30.232 [2024-11-19 08:40:51.949143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:19:30.232 [2024-11-19 08:40:51.949152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:51.987153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:51.987243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:30.232 [2024-11-19 08:40:51.987267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.047 ms 00:19:30.232 [2024-11-19 08:40:51.987287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:51.996453] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:30.232 [2024-11-19 08:40:52.012996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:52.013061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:30.232 [2024-11-19 08:40:52.013077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.622 ms 00:19:30.232 [2024-11-19 08:40:52.013084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:52.013181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:52.013191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:30.232 [2024-11-19 08:40:52.013204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.232 [2024-11-19 08:40:52.013210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:52.013268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:52.013279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:30.232 [2024-11-19 08:40:52.013288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:30.232 [2024-11-19 
08:40:52.013304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:52.013330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:52.013338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:30.232 [2024-11-19 08:40:52.013351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.232 [2024-11-19 08:40:52.013360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:52.013392] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:30.232 [2024-11-19 08:40:52.013401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:52.013409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:30.232 [2024-11-19 08:40:52.013416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:30.232 [2024-11-19 08:40:52.013423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:52.017019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:52.017057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:30.232 [2024-11-19 08:40:52.017067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms 00:19:30.232 [2024-11-19 08:40:52.017075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:52.017147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.232 [2024-11-19 08:40:52.017168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:30.232 [2024-11-19 08:40:52.017177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:30.232 [2024-11-19 08:40:52.017185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.232 [2024-11-19 08:40:52.018097] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:30.232 [2024-11-19 08:40:52.019006] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.560 ms, result 0 00:19:30.232 [2024-11-19 08:40:52.020103] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:30.232 Some configs were skipped because the RPC state that can call them passed over. 
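The 'FTL startup' management process above finishes with duration = 134.560 ms, result 0, and that total is roughly the sum of the per-step durations that mngt/ftl_mngt.c:trace_step prints as Action / name / duration / status entries. A minimal cross-check sketch in Python, assuming the console output has been saved to a local text file (the file name, the helper name, and the exact regular expression are illustrative and not part of the test scripts):

import re

# Matches the per-step duration entries emitted by mngt/ftl_mngt.c:trace_step,
# e.g. "... 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.185 ms"
STEP_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

def sum_step_durations(path="console.log"):
    """Sum the traced step durations (in ms) found in a saved log file."""
    total = 0.0
    with open(path) as fh:
        for line in fh:
            # A wrapped console line can carry several entries, so scan each
            # line for every match rather than only the first one.
            for match in STEP_RE.finditer(line):
                total += float(match.group(1))
    return total

if __name__ == "__main__":
    # On a full log this sums every management process (startup, trim, shutdown),
    # so restrict the input to one section before comparing against a single total.
    print(f"sum of traced step durations: {sum_step_durations():.3f} ms")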
00:19:30.232 08:40:52 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:30.492 [2024-11-19 08:40:52.243009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.492 [2024-11-19 08:40:52.243060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:30.492 [2024-11-19 08:40:52.243075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:19:30.492 [2024-11-19 08:40:52.243083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.492 [2024-11-19 08:40:52.243135] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.694 ms, result 0 00:19:30.492 true 00:19:30.492 08:40:52 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:30.754 [2024-11-19 08:40:52.434530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.754 [2024-11-19 08:40:52.434579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:30.754 [2024-11-19 08:40:52.434591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.334 ms 00:19:30.754 [2024-11-19 08:40:52.434600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.754 [2024-11-19 08:40:52.434633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.437 ms, result 0 00:19:30.754 true 00:19:30.754 08:40:52 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 86727 00:19:30.754 08:40:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 86727 ']' 00:19:30.754 08:40:52 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 86727 00:19:30.754 08:40:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:30.754 08:40:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.754 08:40:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86727 00:19:30.754 08:40:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.754 08:40:52 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.754 killing process with pid 86727 00:19:30.754 08:40:52 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86727' 00:19:30.754 08:40:52 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 86727 00:19:30.754 08:40:52 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 86727 00:19:30.754 [2024-11-19 08:40:52.632134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.754 [2024-11-19 08:40:52.632198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:30.754 [2024-11-19 08:40:52.632228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:30.754 [2024-11-19 08:40:52.632236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.754 [2024-11-19 08:40:52.632264] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:30.754 [2024-11-19 08:40:52.632939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.754 [2024-11-19 08:40:52.632960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:30.754 [2024-11-19 08:40:52.632969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.663 ms 00:19:30.754 [2024-11-19 08:40:52.632981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.754 [2024-11-19 08:40:52.633231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.754 [2024-11-19 08:40:52.633250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:30.754 [2024-11-19 08:40:52.633259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:19:30.754 [2024-11-19 08:40:52.633268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.754 [2024-11-19 08:40:52.636472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.754 [2024-11-19 08:40:52.636512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:30.754 [2024-11-19 08:40:52.636522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.192 ms 00:19:30.754 [2024-11-19 08:40:52.636543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.754 [2024-11-19 08:40:52.641776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.754 [2024-11-19 08:40:52.641812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:30.755 [2024-11-19 08:40:52.641838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.203 ms 00:19:30.755 [2024-11-19 08:40:52.641858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.755 [2024-11-19 08:40:52.643448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.755 [2024-11-19 08:40:52.643486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:30.755 [2024-11-19 08:40:52.643512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.516 ms 00:19:30.755 [2024-11-19 08:40:52.643520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.755 [2024-11-19 08:40:52.648046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.755 [2024-11-19 08:40:52.648083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:30.755 [2024-11-19 08:40:52.648109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.503 ms 00:19:30.755 [2024-11-19 08:40:52.648121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.755 [2024-11-19 08:40:52.648230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.755 [2024-11-19 08:40:52.648243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:30.755 [2024-11-19 08:40:52.648260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:30.755 [2024-11-19 08:40:52.648270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.755 [2024-11-19 08:40:52.650725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.755 [2024-11-19 08:40:52.650790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:30.755 [2024-11-19 08:40:52.650799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.441 ms 00:19:30.755 [2024-11-19 08:40:52.650810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.755 [2024-11-19 08:40:52.652158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.755 [2024-11-19 08:40:52.652196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:30.755 [2024-11-19 
08:40:52.652205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.319 ms 00:19:30.755 [2024-11-19 08:40:52.652212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.755 [2024-11-19 08:40:52.653401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.755 [2024-11-19 08:40:52.653440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:30.755 [2024-11-19 08:40:52.653448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms 00:19:30.755 [2024-11-19 08:40:52.653456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.755 [2024-11-19 08:40:52.654659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.755 [2024-11-19 08:40:52.654697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:30.755 [2024-11-19 08:40:52.654705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:19:30.755 [2024-11-19 08:40:52.654713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.755 [2024-11-19 08:40:52.654774] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:30.755 [2024-11-19 08:40:52.654791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654943] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.654997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 
08:40:52.655164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:30.755 [2024-11-19 08:40:52.655351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:19:30.756 [2024-11-19 08:40:52.655378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:30.756 [2024-11-19 08:40:52.655651] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:30.756 [2024-11-19 08:40:52.655658] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d4ca7f05-e3be-4ef5-9cfc-3c0158818b5e 00:19:30.756 [2024-11-19 08:40:52.655668] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:30.756 [2024-11-19 08:40:52.655676] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:30.756 [2024-11-19 08:40:52.655687] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:30.756 [2024-11-19 08:40:52.655694] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:30.756 [2024-11-19 08:40:52.655702] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:30.756 [2024-11-19 08:40:52.655710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:30.756 [2024-11-19 08:40:52.655730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:30.756 [2024-11-19 08:40:52.655737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:30.756 [2024-11-19 08:40:52.655746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:30.756 [2024-11-19 08:40:52.655753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.756 [2024-11-19 08:40:52.655762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:30.756 [2024-11-19 08:40:52.655770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:19:30.756 [2024-11-19 08:40:52.655780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-19 08:40:52.657502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.017 [2024-11-19 08:40:52.657526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:31.017 [2024-11-19 08:40:52.657550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 00:19:31.017 [2024-11-19 08:40:52.657559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-19 08:40:52.657663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:31.017 [2024-11-19 08:40:52.657674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:31.017 [2024-11-19 08:40:52.657681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:31.017 [2024-11-19 08:40:52.657689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-19 08:40:52.664018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.017 [2024-11-19 08:40:52.664043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.017 [2024-11-19 08:40:52.664051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.017 [2024-11-19 08:40:52.664060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-19 08:40:52.664135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.017 [2024-11-19 08:40:52.664147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.017 [2024-11-19 08:40:52.664154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.017 [2024-11-19 08:40:52.664165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-19 08:40:52.664207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.017 [2024-11-19 08:40:52.664224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.017 [2024-11-19 08:40:52.664232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.017 [2024-11-19 08:40:52.664246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-19 08:40:52.664263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.017 [2024-11-19 08:40:52.664273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.017 [2024-11-19 08:40:52.664279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.017 [2024-11-19 08:40:52.664287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-19 08:40:52.677595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.017 [2024-11-19 08:40:52.677661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.017 [2024-11-19 08:40:52.677672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.017 [2024-11-19 08:40:52.677681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-19 08:40:52.685772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.017 [2024-11-19 08:40:52.685841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.017 [2024-11-19 08:40:52.685851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.017 [2024-11-19 08:40:52.685863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-19 08:40:52.685910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.017 [2024-11-19 08:40:52.685920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.017 [2024-11-19 08:40:52.685930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.017 [2024-11-19 08:40:52.685939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:31.017 [2024-11-19 08:40:52.685980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.017 [2024-11-19 08:40:52.685991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.017 [2024-11-19 08:40:52.685999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.017 [2024-11-19 08:40:52.686008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-19 08:40:52.686078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.017 [2024-11-19 08:40:52.686090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.017 [2024-11-19 08:40:52.686099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.017 [2024-11-19 08:40:52.686108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-19 08:40:52.686147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.018 [2024-11-19 08:40:52.686161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:31.018 [2024-11-19 08:40:52.686169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.018 [2024-11-19 08:40:52.686180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.018 [2024-11-19 08:40:52.686224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.018 [2024-11-19 08:40:52.686249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.018 [2024-11-19 08:40:52.686256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.018 [2024-11-19 08:40:52.686267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.018 [2024-11-19 08:40:52.686311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.018 [2024-11-19 08:40:52.686321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.018 [2024-11-19 08:40:52.686328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.018 [2024-11-19 08:40:52.686337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.018 [2024-11-19 08:40:52.686471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.412 ms, result 0 00:19:31.276 08:40:52 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:31.276 [2024-11-19 08:40:53.004997] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
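The trim.sh@105 step above invokes spdk_dd to read the first 65536 blocks of the ftl0 bdev back into test/ftl/data, using the bdev configuration saved in test/ftl/config/ftl.json. A small wrapper sketch, assuming the same checkout paths as in the log (the function name and its defaults are illustrative; spdk_dd starts a fresh SPDK application on every run, which is why the EAL and FTL startup messages below appear again):

import subprocess

# Paths copied verbatim from the invocation above; adjust for a different checkout.
SPDK_DD = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd"
OUTPUT  = "/home/vagrant/spdk_repo/spdk/test/ftl/data"
CONFIG  = "/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json"

def read_back_ftl(count=65536):
    """Copy `count` blocks from the ftl0 bdev into OUTPUT using the saved JSON config."""
    cmd = [SPDK_DD, "--ib=ftl0", f"--of={OUTPUT}", f"--count={count}", f"--json={CONFIG}"]
    # check=True raises CalledProcessError on a non-zero exit,
    # so a failed copy is not silently ignored.
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    read_back_ftl()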
00:19:31.276 [2024-11-19 08:40:53.005148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86763 ] 00:19:31.276 [2024-11-19 08:40:53.159778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.536 [2024-11-19 08:40:53.184550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.536 [2024-11-19 08:40:53.286486] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:31.536 [2024-11-19 08:40:53.286558] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:31.537 [2024-11-19 08:40:53.440279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.537 [2024-11-19 08:40:53.440328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:31.537 [2024-11-19 08:40:53.440356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:31.537 [2024-11-19 08:40:53.440363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.799 [2024-11-19 08:40:53.442333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.799 [2024-11-19 08:40:53.442374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.799 [2024-11-19 08:40:53.442383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.956 ms 00:19:31.799 [2024-11-19 08:40:53.442398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.799 [2024-11-19 08:40:53.442465] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:31.799 [2024-11-19 08:40:53.442662] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:31.799 [2024-11-19 08:40:53.442685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.799 [2024-11-19 08:40:53.442696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.799 [2024-11-19 08:40:53.442704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:19:31.799 [2024-11-19 08:40:53.442734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.799 [2024-11-19 08:40:53.444176] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:31.799 [2024-11-19 08:40:53.446624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.799 [2024-11-19 08:40:53.446663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:31.799 [2024-11-19 08:40:53.446675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.454 ms 00:19:31.799 [2024-11-19 08:40:53.446682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.799 [2024-11-19 08:40:53.446748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.799 [2024-11-19 08:40:53.446759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:31.799 [2024-11-19 08:40:53.446767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:31.799 [2024-11-19 08:40:53.446774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.799 [2024-11-19 08:40:53.453401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:31.799 [2024-11-19 08:40:53.453428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.799 [2024-11-19 08:40:53.453465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.607 ms 00:19:31.799 [2024-11-19 08:40:53.453472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.799 [2024-11-19 08:40:53.453591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.799 [2024-11-19 08:40:53.453603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.799 [2024-11-19 08:40:53.453612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:31.799 [2024-11-19 08:40:53.453627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.799 [2024-11-19 08:40:53.453665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.799 [2024-11-19 08:40:53.453674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:31.799 [2024-11-19 08:40:53.453681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:31.799 [2024-11-19 08:40:53.453689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.799 [2024-11-19 08:40:53.453710] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:31.799 [2024-11-19 08:40:53.455346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.799 [2024-11-19 08:40:53.455374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.799 [2024-11-19 08:40:53.455398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms 00:19:31.799 [2024-11-19 08:40:53.455405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.799 [2024-11-19 08:40:53.455457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.799 [2024-11-19 08:40:53.455477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:31.799 [2024-11-19 08:40:53.455486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:31.799 [2024-11-19 08:40:53.455493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.799 [2024-11-19 08:40:53.455510] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:31.799 [2024-11-19 08:40:53.455528] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:31.799 [2024-11-19 08:40:53.455566] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:31.799 [2024-11-19 08:40:53.455585] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:31.799 [2024-11-19 08:40:53.455674] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:31.799 [2024-11-19 08:40:53.455687] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:31.799 [2024-11-19 08:40:53.455696] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:31.799 [2024-11-19 08:40:53.455707] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:31.799 [2024-11-19 08:40:53.455716] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:31.799 [2024-11-19 08:40:53.455723] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:31.799 [2024-11-19 08:40:53.455730] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:31.799 [2024-11-19 08:40:53.455749] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:31.799 [2024-11-19 08:40:53.455763] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:31.799 [2024-11-19 08:40:53.455774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.799 [2024-11-19 08:40:53.455784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:31.799 [2024-11-19 08:40:53.455792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:31.799 [2024-11-19 08:40:53.455805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.799 [2024-11-19 08:40:53.455877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.799 [2024-11-19 08:40:53.455886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:31.799 [2024-11-19 08:40:53.455893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:31.799 [2024-11-19 08:40:53.455900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.799 [2024-11-19 08:40:53.455981] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:31.799 [2024-11-19 08:40:53.455993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:31.799 [2024-11-19 08:40:53.456005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.799 [2024-11-19 08:40:53.456021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.799 [2024-11-19 08:40:53.456029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:31.799 [2024-11-19 08:40:53.456036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:31.799 [2024-11-19 08:40:53.456042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:31.799 [2024-11-19 08:40:53.456052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:31.799 [2024-11-19 08:40:53.456059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:31.799 [2024-11-19 08:40:53.456066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.799 [2024-11-19 08:40:53.456074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:31.799 [2024-11-19 08:40:53.456081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:31.799 [2024-11-19 08:40:53.456088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.799 [2024-11-19 08:40:53.456094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:31.799 [2024-11-19 08:40:53.456101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:31.799 [2024-11-19 08:40:53.456107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.799 [2024-11-19 08:40:53.456113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:31.799 [2024-11-19 08:40:53.456120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:31.799 [2024-11-19 08:40:53.456128] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.799 [2024-11-19 08:40:53.456134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:31.799 [2024-11-19 08:40:53.456141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:31.799 [2024-11-19 08:40:53.456147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.799 [2024-11-19 08:40:53.456153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:31.799 [2024-11-19 08:40:53.456164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:31.799 [2024-11-19 08:40:53.456170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.799 [2024-11-19 08:40:53.456176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:31.799 [2024-11-19 08:40:53.456182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:31.799 [2024-11-19 08:40:53.456188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.799 [2024-11-19 08:40:53.456194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:31.799 [2024-11-19 08:40:53.456201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:31.799 [2024-11-19 08:40:53.456207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.799 [2024-11-19 08:40:53.456213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:31.799 [2024-11-19 08:40:53.456219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:31.799 [2024-11-19 08:40:53.456226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.799 [2024-11-19 08:40:53.456233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:31.799 [2024-11-19 08:40:53.456240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:31.799 [2024-11-19 08:40:53.456246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.799 [2024-11-19 08:40:53.456252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:31.799 [2024-11-19 08:40:53.456258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:31.799 [2024-11-19 08:40:53.456266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.800 [2024-11-19 08:40:53.456272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:31.800 [2024-11-19 08:40:53.456278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:31.800 [2024-11-19 08:40:53.456284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.800 [2024-11-19 08:40:53.456291] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:31.800 [2024-11-19 08:40:53.456298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:31.800 [2024-11-19 08:40:53.456305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.800 [2024-11-19 08:40:53.456312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.800 [2024-11-19 08:40:53.456319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:31.800 [2024-11-19 08:40:53.456325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:31.800 [2024-11-19 08:40:53.456332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:31.800 
[2024-11-19 08:40:53.456339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:31.800 [2024-11-19 08:40:53.456345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:31.800 [2024-11-19 08:40:53.456352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:31.800 [2024-11-19 08:40:53.456359] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:31.800 [2024-11-19 08:40:53.456368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.800 [2024-11-19 08:40:53.456378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:31.800 [2024-11-19 08:40:53.456385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:31.800 [2024-11-19 08:40:53.456392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:31.800 [2024-11-19 08:40:53.456398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:31.800 [2024-11-19 08:40:53.456405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:31.800 [2024-11-19 08:40:53.456412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:31.800 [2024-11-19 08:40:53.456419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:31.800 [2024-11-19 08:40:53.456426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:31.800 [2024-11-19 08:40:53.456432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:31.800 [2024-11-19 08:40:53.456447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:31.800 [2024-11-19 08:40:53.456454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:31.800 [2024-11-19 08:40:53.456461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:31.800 [2024-11-19 08:40:53.456468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:31.800 [2024-11-19 08:40:53.456475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:31.800 [2024-11-19 08:40:53.456482] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:31.800 [2024-11-19 08:40:53.456490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.800 [2024-11-19 08:40:53.456503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:31.800 [2024-11-19 08:40:53.456510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:31.800 [2024-11-19 08:40:53.456517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:31.800 [2024-11-19 08:40:53.456523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:31.800 [2024-11-19 08:40:53.456532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.456539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:31.800 [2024-11-19 08:40:53.456547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:19:31.800 [2024-11-19 08:40:53.456554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.468468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.468510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.800 [2024-11-19 08:40:53.468537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.887 ms 00:19:31.800 [2024-11-19 08:40:53.468545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.468674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.468687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:31.800 [2024-11-19 08:40:53.468699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:31.800 [2024-11-19 08:40:53.468706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.492257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.492358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.800 [2024-11-19 08:40:53.492398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.558 ms 00:19:31.800 [2024-11-19 08:40:53.492425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.492649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.492763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.800 [2024-11-19 08:40:53.492796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:31.800 [2024-11-19 08:40:53.492852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.493565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.493633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.800 [2024-11-19 08:40:53.493663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:19:31.800 [2024-11-19 08:40:53.493690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.494070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.494141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.800 [2024-11-19 08:40:53.494180] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:19:31.800 [2024-11-19 08:40:53.494229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.505082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.505144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.800 [2024-11-19 08:40:53.505186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.804 ms 00:19:31.800 [2024-11-19 08:40:53.505208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.508650] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:31.800 [2024-11-19 08:40:53.508710] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:31.800 [2024-11-19 08:40:53.508748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.508762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:31.800 [2024-11-19 08:40:53.508775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.325 ms 00:19:31.800 [2024-11-19 08:40:53.508787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.525351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.525395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:31.800 [2024-11-19 08:40:53.525407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.515 ms 00:19:31.800 [2024-11-19 08:40:53.525416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.527426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.527458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:31.800 [2024-11-19 08:40:53.527484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.933 ms 00:19:31.800 [2024-11-19 08:40:53.527492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.529152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.529206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:31.800 [2024-11-19 08:40:53.529215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:19:31.800 [2024-11-19 08:40:53.529222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.529525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.529550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:31.800 [2024-11-19 08:40:53.529560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:19:31.800 [2024-11-19 08:40:53.529568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.550440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.550519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:31.800 [2024-11-19 08:40:53.550534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.875 ms 00:19:31.800 [2024-11-19 08:40:53.550542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.556282] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:31.800 [2024-11-19 08:40:53.572081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.572130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:31.800 [2024-11-19 08:40:53.572143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.489 ms 00:19:31.800 [2024-11-19 08:40:53.572173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.800 [2024-11-19 08:40:53.572268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.800 [2024-11-19 08:40:53.572280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:31.800 [2024-11-19 08:40:53.572288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:31.801 [2024-11-19 08:40:53.572305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.801 [2024-11-19 08:40:53.572363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.801 [2024-11-19 08:40:53.572372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:31.801 [2024-11-19 08:40:53.572379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:31.801 [2024-11-19 08:40:53.572386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.801 [2024-11-19 08:40:53.572406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.801 [2024-11-19 08:40:53.572414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:31.801 [2024-11-19 08:40:53.572422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:31.801 [2024-11-19 08:40:53.572428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.801 [2024-11-19 08:40:53.572462] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:31.801 [2024-11-19 08:40:53.572471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.801 [2024-11-19 08:40:53.572478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:31.801 [2024-11-19 08:40:53.572486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:31.801 [2024-11-19 08:40:53.572493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.801 [2024-11-19 08:40:53.576406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.801 [2024-11-19 08:40:53.576445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:31.801 [2024-11-19 08:40:53.576456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.902 ms 00:19:31.801 [2024-11-19 08:40:53.576464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.801 [2024-11-19 08:40:53.576554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.801 [2024-11-19 08:40:53.576565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:31.801 [2024-11-19 08:40:53.576574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:31.801 [2024-11-19 08:40:53.576582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.801 
[2024-11-19 08:40:53.577511] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:31.801 [2024-11-19 08:40:53.578450] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 137.231 ms, result 0 00:19:31.801 [2024-11-19 08:40:53.579175] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:31.801 [2024-11-19 08:40:53.588992] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:32.741  [2024-11-19T08:40:56.028Z] Copying: 28/256 [MB] (28 MBps) [2024-11-19T08:40:56.969Z] Copying: 54/256 [MB] (25 MBps) [2024-11-19T08:40:57.909Z] Copying: 80/256 [MB] (26 MBps) [2024-11-19T08:40:58.849Z] Copying: 107/256 [MB] (26 MBps) [2024-11-19T08:40:59.786Z] Copying: 134/256 [MB] (26 MBps) [2024-11-19T08:41:00.727Z] Copying: 161/256 [MB] (27 MBps) [2024-11-19T08:41:01.667Z] Copying: 189/256 [MB] (27 MBps) [2024-11-19T08:41:03.047Z] Copying: 216/256 [MB] (27 MBps) [2024-11-19T08:41:03.308Z] Copying: 244/256 [MB] (27 MBps) [2024-11-19T08:41:03.571Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-19 08:41:03.476110] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:41.664 [2024-11-19 08:41:03.479450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.664 [2024-11-19 08:41:03.479545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:41.664 [2024-11-19 08:41:03.479574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:41.664 [2024-11-19 08:41:03.479596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.664 [2024-11-19 08:41:03.479647] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:41.664 [2024-11-19 08:41:03.480910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.664 [2024-11-19 08:41:03.480963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:41.664 [2024-11-19 08:41:03.480985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:19:41.664 [2024-11-19 08:41:03.481030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.664 [2024-11-19 08:41:03.481616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.664 [2024-11-19 08:41:03.481665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:41.664 [2024-11-19 08:41:03.481687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:19:41.664 [2024-11-19 08:41:03.481753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.664 [2024-11-19 08:41:03.487964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.664 [2024-11-19 08:41:03.488008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:41.664 [2024-11-19 08:41:03.488044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.181 ms 00:19:41.664 [2024-11-19 08:41:03.488058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.664 [2024-11-19 08:41:03.497308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.664 [2024-11-19 08:41:03.497365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:41.664 [2024-11-19 
08:41:03.497380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.191 ms 00:19:41.664 [2024-11-19 08:41:03.497389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.664 [2024-11-19 08:41:03.499534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.664 [2024-11-19 08:41:03.499587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:41.664 [2024-11-19 08:41:03.499600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:19:41.664 [2024-11-19 08:41:03.499610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.664 [2024-11-19 08:41:03.504335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.664 [2024-11-19 08:41:03.504401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:41.664 [2024-11-19 08:41:03.504414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.691 ms 00:19:41.664 [2024-11-19 08:41:03.504436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.664 [2024-11-19 08:41:03.504578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.664 [2024-11-19 08:41:03.504590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:41.664 [2024-11-19 08:41:03.504600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:41.664 [2024-11-19 08:41:03.504608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.664 [2024-11-19 08:41:03.507596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.664 [2024-11-19 08:41:03.507643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:41.664 [2024-11-19 08:41:03.507653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:19:41.664 [2024-11-19 08:41:03.507661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.664 [2024-11-19 08:41:03.509771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.664 [2024-11-19 08:41:03.509814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:41.664 [2024-11-19 08:41:03.509841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.077 ms 00:19:41.664 [2024-11-19 08:41:03.509848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.664 [2024-11-19 08:41:03.511482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.665 [2024-11-19 08:41:03.511525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:41.665 [2024-11-19 08:41:03.511534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.602 ms 00:19:41.665 [2024-11-19 08:41:03.511542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.665 [2024-11-19 08:41:03.513195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.665 [2024-11-19 08:41:03.513241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:41.665 [2024-11-19 08:41:03.513253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.594 ms 00:19:41.665 [2024-11-19 08:41:03.513260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.665 [2024-11-19 08:41:03.513293] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:41.665 [2024-11-19 08:41:03.513308] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513520] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 
[2024-11-19 08:41:03.513735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 
state: free 00:19:41.665 [2024-11-19 08:41:03.513939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:41.665 [2024-11-19 08:41:03.513990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.513997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:19:41.666 [2024-11-19 08:41:03.514139] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:41.666 [2024-11-19 08:41:03.514146] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d4ca7f05-e3be-4ef5-9cfc-3c0158818b5e 00:19:41.666 [2024-11-19 08:41:03.514160] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:41.666 [2024-11-19 08:41:03.514168] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:41.666 [2024-11-19 08:41:03.514176] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:41.666 [2024-11-19 08:41:03.514183] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:41.666 [2024-11-19 08:41:03.514190] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:41.666 [2024-11-19 08:41:03.514197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:41.666 [2024-11-19 08:41:03.514204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:41.666 [2024-11-19 08:41:03.514210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:41.666 [2024-11-19 08:41:03.514216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:41.666 [2024-11-19 08:41:03.514225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.666 [2024-11-19 08:41:03.514237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:41.666 [2024-11-19 08:41:03.514245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:19:41.666 [2024-11-19 08:41:03.514259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.516785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.666 [2024-11-19 08:41:03.516820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:41.666 [2024-11-19 08:41:03.516837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.509 ms 00:19:41.666 [2024-11-19 08:41:03.516845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.517000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.666 [2024-11-19 08:41:03.517010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:41.666 [2024-11-19 08:41:03.517018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:19:41.666 [2024-11-19 08:41:03.517026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.525529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.666 [2024-11-19 08:41:03.525557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.666 [2024-11-19 08:41:03.525568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.666 [2024-11-19 08:41:03.525577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.525653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.666 [2024-11-19 08:41:03.525672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.666 [2024-11-19 08:41:03.525681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.666 [2024-11-19 08:41:03.525690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:19:41.666 [2024-11-19 08:41:03.525751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.666 [2024-11-19 08:41:03.525764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.666 [2024-11-19 08:41:03.525772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.666 [2024-11-19 08:41:03.525781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.525807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.666 [2024-11-19 08:41:03.525820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.666 [2024-11-19 08:41:03.525829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.666 [2024-11-19 08:41:03.525837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.542930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.666 [2024-11-19 08:41:03.542984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:41.666 [2024-11-19 08:41:03.542996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.666 [2024-11-19 08:41:03.543004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.553037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.666 [2024-11-19 08:41:03.553094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:41.666 [2024-11-19 08:41:03.553105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.666 [2024-11-19 08:41:03.553114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.553157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.666 [2024-11-19 08:41:03.553174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:41.666 [2024-11-19 08:41:03.553183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.666 [2024-11-19 08:41:03.553191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.553219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.666 [2024-11-19 08:41:03.553228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:41.666 [2024-11-19 08:41:03.553238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.666 [2024-11-19 08:41:03.553246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.553316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.666 [2024-11-19 08:41:03.553327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:41.666 [2024-11-19 08:41:03.553335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.666 [2024-11-19 08:41:03.553342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.553389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.666 [2024-11-19 08:41:03.553405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:41.666 [2024-11-19 08:41:03.553414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.666 [2024-11-19 
08:41:03.553424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.553464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.666 [2024-11-19 08:41:03.553489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:41.666 [2024-11-19 08:41:03.553497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.666 [2024-11-19 08:41:03.553504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.553545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.666 [2024-11-19 08:41:03.553555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:41.666 [2024-11-19 08:41:03.553573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.666 [2024-11-19 08:41:03.553580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.666 [2024-11-19 08:41:03.553771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.403 ms, result 0 00:19:41.926 00:19:41.926 00:19:41.926 08:41:03 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:42.497 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:42.497 08:41:04 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:42.497 08:41:04 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:42.497 08:41:04 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:42.497 08:41:04 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:42.497 08:41:04 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:42.497 08:41:04 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:42.497 08:41:04 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 86727 00:19:42.497 08:41:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 86727 ']' 00:19:42.497 08:41:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 86727 00:19:42.497 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86727) - No such process 00:19:42.497 Process with pid 86727 is not found 00:19:42.497 08:41:04 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 86727 is not found' 00:19:42.497 00:19:42.497 real 0m52.822s 00:19:42.497 user 1m17.663s 00:19:42.497 sys 0m5.654s 00:19:42.497 08:41:04 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.497 08:41:04 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:42.497 ************************************ 00:19:42.497 END TEST ftl_trim 00:19:42.497 ************************************ 00:19:42.497 08:41:04 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:42.497 08:41:04 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:42.497 08:41:04 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.497 08:41:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:42.497 ************************************ 00:19:42.497 START TEST ftl_restore 00:19:42.497 ************************************ 00:19:42.497 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 
0000:00:10.0 0000:00:11.0 00:19:42.758 * Looking for test storage... 00:19:42.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:42.758 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:42.758 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:19:42.758 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:42.758 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:42.758 08:41:04 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:42.758 08:41:04 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:42.758 08:41:04 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:42.758 08:41:04 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.758 08:41:04 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:19:42.758 08:41:04 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:19:42.758 08:41:04 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:19:42.758 08:41:04 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:42.759 08:41:04 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:42.759 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.759 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:42.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.759 --rc genhtml_branch_coverage=1 00:19:42.759 --rc genhtml_function_coverage=1 00:19:42.759 --rc genhtml_legend=1 00:19:42.759 --rc geninfo_all_blocks=1 00:19:42.759 --rc geninfo_unexecuted_blocks=1 00:19:42.759 00:19:42.759 ' 00:19:42.759 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:42.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.759 --rc 
genhtml_branch_coverage=1 00:19:42.759 --rc genhtml_function_coverage=1 00:19:42.759 --rc genhtml_legend=1 00:19:42.759 --rc geninfo_all_blocks=1 00:19:42.759 --rc geninfo_unexecuted_blocks=1 00:19:42.759 00:19:42.759 ' 00:19:42.759 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:42.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.759 --rc genhtml_branch_coverage=1 00:19:42.759 --rc genhtml_function_coverage=1 00:19:42.759 --rc genhtml_legend=1 00:19:42.759 --rc geninfo_all_blocks=1 00:19:42.759 --rc geninfo_unexecuted_blocks=1 00:19:42.759 00:19:42.759 ' 00:19:42.759 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:42.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.759 --rc genhtml_branch_coverage=1 00:19:42.759 --rc genhtml_function_coverage=1 00:19:42.759 --rc genhtml_legend=1 00:19:42.759 --rc geninfo_all_blocks=1 00:19:42.759 --rc geninfo_unexecuted_blocks=1 00:19:42.759 00:19:42.759 ' 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@22 -- 
# export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.ZYhg3ryniO 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86949 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.759 08:41:04 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86949 00:19:42.759 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 86949 ']' 00:19:42.759 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.759 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.759 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.759 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.759 08:41:04 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:43.020 [2024-11-19 08:41:04.748513] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:19:43.020 [2024-11-19 08:41:04.749259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86949 ] 00:19:43.020 [2024-11-19 08:41:04.906500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.280 [2024-11-19 08:41:04.933674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.852 08:41:05 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.852 08:41:05 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:19:43.852 08:41:05 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:43.852 08:41:05 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:43.852 08:41:05 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:43.852 08:41:05 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:43.852 08:41:05 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:43.852 08:41:05 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:44.112 08:41:05 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:44.112 08:41:05 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:44.112 08:41:05 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:44.112 08:41:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:44.112 08:41:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:44.112 08:41:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:44.112 08:41:05 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:44.112 08:41:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:44.112 08:41:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:44.112 { 00:19:44.112 "name": "nvme0n1", 00:19:44.112 "aliases": [ 00:19:44.112 "87521f11-17b9-481e-ae7d-4903ff210184" 00:19:44.112 ], 00:19:44.112 "product_name": "NVMe disk", 00:19:44.112 "block_size": 4096, 00:19:44.112 "num_blocks": 1310720, 00:19:44.112 "uuid": "87521f11-17b9-481e-ae7d-4903ff210184", 00:19:44.112 "numa_id": -1, 00:19:44.112 "assigned_rate_limits": { 00:19:44.112 "rw_ios_per_sec": 0, 00:19:44.112 "rw_mbytes_per_sec": 0, 00:19:44.112 "r_mbytes_per_sec": 0, 00:19:44.112 "w_mbytes_per_sec": 0 00:19:44.112 }, 00:19:44.112 "claimed": true, 00:19:44.112 "claim_type": "read_many_write_one", 00:19:44.112 "zoned": false, 00:19:44.112 "supported_io_types": { 00:19:44.112 "read": true, 00:19:44.112 "write": true, 00:19:44.112 "unmap": true, 00:19:44.112 "flush": true, 00:19:44.112 "reset": true, 00:19:44.112 "nvme_admin": true, 00:19:44.112 "nvme_io": true, 00:19:44.112 "nvme_io_md": false, 00:19:44.112 "write_zeroes": true, 00:19:44.112 "zcopy": false, 00:19:44.112 "get_zone_info": false, 00:19:44.112 "zone_management": false, 00:19:44.112 "zone_append": false, 00:19:44.112 "compare": true, 00:19:44.112 "compare_and_write": false, 00:19:44.112 "abort": true, 00:19:44.112 "seek_hole": false, 00:19:44.112 "seek_data": false, 00:19:44.112 "copy": true, 00:19:44.112 "nvme_iov_md": false 00:19:44.112 }, 00:19:44.112 "driver_specific": { 00:19:44.112 "nvme": [ 
00:19:44.112 { 00:19:44.112 "pci_address": "0000:00:11.0", 00:19:44.112 "trid": { 00:19:44.112 "trtype": "PCIe", 00:19:44.112 "traddr": "0000:00:11.0" 00:19:44.112 }, 00:19:44.112 "ctrlr_data": { 00:19:44.112 "cntlid": 0, 00:19:44.113 "vendor_id": "0x1b36", 00:19:44.113 "model_number": "QEMU NVMe Ctrl", 00:19:44.113 "serial_number": "12341", 00:19:44.113 "firmware_revision": "8.0.0", 00:19:44.113 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:44.113 "oacs": { 00:19:44.113 "security": 0, 00:19:44.113 "format": 1, 00:19:44.113 "firmware": 0, 00:19:44.113 "ns_manage": 1 00:19:44.113 }, 00:19:44.113 "multi_ctrlr": false, 00:19:44.113 "ana_reporting": false 00:19:44.113 }, 00:19:44.113 "vs": { 00:19:44.113 "nvme_version": "1.4" 00:19:44.113 }, 00:19:44.113 "ns_data": { 00:19:44.113 "id": 1, 00:19:44.113 "can_share": false 00:19:44.113 } 00:19:44.113 } 00:19:44.113 ], 00:19:44.113 "mp_policy": "active_passive" 00:19:44.113 } 00:19:44.113 } 00:19:44.113 ]' 00:19:44.113 08:41:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:44.373 08:41:06 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:44.373 08:41:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:44.373 08:41:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:44.373 08:41:06 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:44.373 08:41:06 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:19:44.373 08:41:06 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:44.373 08:41:06 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:44.373 08:41:06 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:44.373 08:41:06 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:44.373 08:41:06 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:44.634 08:41:06 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=3f3e11eb-d663-4ea0-9d64-a39c34dd8c67 00:19:44.634 08:41:06 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:44.634 08:41:06 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3f3e11eb-d663-4ea0-9d64-a39c34dd8c67 00:19:44.634 08:41:06 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:44.894 08:41:06 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=780b27f9-6687-4b93-bae5-2bf19eb228ca 00:19:44.894 08:41:06 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 780b27f9-6687-4b93-bae5-2bf19eb228ca 00:19:45.153 08:41:06 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=c1324637-a8e1-4cd7-b462-a531045ca205 00:19:45.153 08:41:06 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:45.154 08:41:06 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c1324637-a8e1-4cd7-b462-a531045ca205 00:19:45.154 08:41:06 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:45.154 08:41:06 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:45.154 08:41:06 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=c1324637-a8e1-4cd7-b462-a531045ca205 00:19:45.154 08:41:06 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:45.154 08:41:06 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
c1324637-a8e1-4cd7-b462-a531045ca205 00:19:45.154 08:41:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c1324637-a8e1-4cd7-b462-a531045ca205 00:19:45.154 08:41:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:45.154 08:41:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:45.154 08:41:06 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:45.154 08:41:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1324637-a8e1-4cd7-b462-a531045ca205 00:19:45.413 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:45.414 { 00:19:45.414 "name": "c1324637-a8e1-4cd7-b462-a531045ca205", 00:19:45.414 "aliases": [ 00:19:45.414 "lvs/nvme0n1p0" 00:19:45.414 ], 00:19:45.414 "product_name": "Logical Volume", 00:19:45.414 "block_size": 4096, 00:19:45.414 "num_blocks": 26476544, 00:19:45.414 "uuid": "c1324637-a8e1-4cd7-b462-a531045ca205", 00:19:45.414 "assigned_rate_limits": { 00:19:45.414 "rw_ios_per_sec": 0, 00:19:45.414 "rw_mbytes_per_sec": 0, 00:19:45.414 "r_mbytes_per_sec": 0, 00:19:45.414 "w_mbytes_per_sec": 0 00:19:45.414 }, 00:19:45.414 "claimed": false, 00:19:45.414 "zoned": false, 00:19:45.414 "supported_io_types": { 00:19:45.414 "read": true, 00:19:45.414 "write": true, 00:19:45.414 "unmap": true, 00:19:45.414 "flush": false, 00:19:45.414 "reset": true, 00:19:45.414 "nvme_admin": false, 00:19:45.414 "nvme_io": false, 00:19:45.414 "nvme_io_md": false, 00:19:45.414 "write_zeroes": true, 00:19:45.414 "zcopy": false, 00:19:45.414 "get_zone_info": false, 00:19:45.414 "zone_management": false, 00:19:45.414 "zone_append": false, 00:19:45.414 "compare": false, 00:19:45.414 "compare_and_write": false, 00:19:45.414 "abort": false, 00:19:45.414 "seek_hole": true, 00:19:45.414 "seek_data": true, 00:19:45.414 "copy": false, 00:19:45.414 "nvme_iov_md": false 00:19:45.414 }, 00:19:45.414 "driver_specific": { 00:19:45.414 "lvol": { 00:19:45.414 "lvol_store_uuid": "780b27f9-6687-4b93-bae5-2bf19eb228ca", 00:19:45.414 "base_bdev": "nvme0n1", 00:19:45.414 "thin_provision": true, 00:19:45.414 "num_allocated_clusters": 0, 00:19:45.414 "snapshot": false, 00:19:45.414 "clone": false, 00:19:45.414 "esnap_clone": false 00:19:45.414 } 00:19:45.414 } 00:19:45.414 } 00:19:45.414 ]' 00:19:45.414 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:45.414 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:45.414 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:45.414 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:45.414 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:45.414 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:45.414 08:41:07 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:45.414 08:41:07 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:45.414 08:41:07 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:45.674 08:41:07 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:45.674 08:41:07 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:45.674 08:41:07 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size c1324637-a8e1-4cd7-b462-a531045ca205 00:19:45.674 08:41:07 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c1324637-a8e1-4cd7-b462-a531045ca205 00:19:45.674 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:45.674 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:45.674 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:45.674 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1324637-a8e1-4cd7-b462-a531045ca205 00:19:45.934 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:45.934 { 00:19:45.934 "name": "c1324637-a8e1-4cd7-b462-a531045ca205", 00:19:45.934 "aliases": [ 00:19:45.934 "lvs/nvme0n1p0" 00:19:45.934 ], 00:19:45.934 "product_name": "Logical Volume", 00:19:45.934 "block_size": 4096, 00:19:45.934 "num_blocks": 26476544, 00:19:45.934 "uuid": "c1324637-a8e1-4cd7-b462-a531045ca205", 00:19:45.934 "assigned_rate_limits": { 00:19:45.934 "rw_ios_per_sec": 0, 00:19:45.934 "rw_mbytes_per_sec": 0, 00:19:45.934 "r_mbytes_per_sec": 0, 00:19:45.934 "w_mbytes_per_sec": 0 00:19:45.934 }, 00:19:45.934 "claimed": false, 00:19:45.934 "zoned": false, 00:19:45.934 "supported_io_types": { 00:19:45.934 "read": true, 00:19:45.934 "write": true, 00:19:45.934 "unmap": true, 00:19:45.934 "flush": false, 00:19:45.934 "reset": true, 00:19:45.934 "nvme_admin": false, 00:19:45.934 "nvme_io": false, 00:19:45.934 "nvme_io_md": false, 00:19:45.934 "write_zeroes": true, 00:19:45.934 "zcopy": false, 00:19:45.935 "get_zone_info": false, 00:19:45.935 "zone_management": false, 00:19:45.935 "zone_append": false, 00:19:45.935 "compare": false, 00:19:45.935 "compare_and_write": false, 00:19:45.935 "abort": false, 00:19:45.935 "seek_hole": true, 00:19:45.935 "seek_data": true, 00:19:45.935 "copy": false, 00:19:45.935 "nvme_iov_md": false 00:19:45.935 }, 00:19:45.935 "driver_specific": { 00:19:45.935 "lvol": { 00:19:45.935 "lvol_store_uuid": "780b27f9-6687-4b93-bae5-2bf19eb228ca", 00:19:45.935 "base_bdev": "nvme0n1", 00:19:45.935 "thin_provision": true, 00:19:45.935 "num_allocated_clusters": 0, 00:19:45.935 "snapshot": false, 00:19:45.935 "clone": false, 00:19:45.935 "esnap_clone": false 00:19:45.935 } 00:19:45.935 } 00:19:45.935 } 00:19:45.935 ]' 00:19:45.935 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:45.935 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:45.935 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:45.935 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:45.935 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:45.935 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:45.935 08:41:07 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:45.935 08:41:07 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:46.195 08:41:07 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:46.195 08:41:07 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size c1324637-a8e1-4cd7-b462-a531045ca205 00:19:46.195 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c1324637-a8e1-4cd7-b462-a531045ca205 00:19:46.195 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:46.195 08:41:07 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:19:46.195 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:46.195 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1324637-a8e1-4cd7-b462-a531045ca205 00:19:46.455 08:41:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:46.455 { 00:19:46.455 "name": "c1324637-a8e1-4cd7-b462-a531045ca205", 00:19:46.455 "aliases": [ 00:19:46.455 "lvs/nvme0n1p0" 00:19:46.455 ], 00:19:46.455 "product_name": "Logical Volume", 00:19:46.455 "block_size": 4096, 00:19:46.455 "num_blocks": 26476544, 00:19:46.455 "uuid": "c1324637-a8e1-4cd7-b462-a531045ca205", 00:19:46.455 "assigned_rate_limits": { 00:19:46.455 "rw_ios_per_sec": 0, 00:19:46.455 "rw_mbytes_per_sec": 0, 00:19:46.455 "r_mbytes_per_sec": 0, 00:19:46.455 "w_mbytes_per_sec": 0 00:19:46.455 }, 00:19:46.455 "claimed": false, 00:19:46.455 "zoned": false, 00:19:46.455 "supported_io_types": { 00:19:46.455 "read": true, 00:19:46.455 "write": true, 00:19:46.455 "unmap": true, 00:19:46.455 "flush": false, 00:19:46.455 "reset": true, 00:19:46.455 "nvme_admin": false, 00:19:46.455 "nvme_io": false, 00:19:46.455 "nvme_io_md": false, 00:19:46.455 "write_zeroes": true, 00:19:46.455 "zcopy": false, 00:19:46.455 "get_zone_info": false, 00:19:46.455 "zone_management": false, 00:19:46.455 "zone_append": false, 00:19:46.455 "compare": false, 00:19:46.455 "compare_and_write": false, 00:19:46.455 "abort": false, 00:19:46.455 "seek_hole": true, 00:19:46.455 "seek_data": true, 00:19:46.455 "copy": false, 00:19:46.455 "nvme_iov_md": false 00:19:46.455 }, 00:19:46.455 "driver_specific": { 00:19:46.455 "lvol": { 00:19:46.455 "lvol_store_uuid": "780b27f9-6687-4b93-bae5-2bf19eb228ca", 00:19:46.455 "base_bdev": "nvme0n1", 00:19:46.455 "thin_provision": true, 00:19:46.455 "num_allocated_clusters": 0, 00:19:46.456 "snapshot": false, 00:19:46.456 "clone": false, 00:19:46.456 "esnap_clone": false 00:19:46.456 } 00:19:46.456 } 00:19:46.456 } 00:19:46.456 ]' 00:19:46.456 08:41:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:46.456 08:41:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:46.456 08:41:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:46.456 08:41:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:46.456 08:41:08 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:46.456 08:41:08 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:46.456 08:41:08 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:46.456 08:41:08 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c1324637-a8e1-4cd7-b462-a531045ca205 --l2p_dram_limit 10' 00:19:46.456 08:41:08 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:46.456 08:41:08 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:46.456 08:41:08 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:46.456 08:41:08 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:46.456 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:46.456 08:41:08 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c1324637-a8e1-4cd7-b462-a531045ca205 --l2p_dram_limit 10 -c nvc0n1p0 00:19:46.720 
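The repeated get_bdev_size traces above pull block_size and num_blocks out of bdev_get_bdevs with jq and reduce them to a size in MiB (26476544 blocks * 4096 B = 103424 MiB); the 5171 MiB nvc0n1p0 split carved from the cache controller is then handed to bdev_ftl_create via -c. A minimal sketch of the size calculation, reusing the rpc.py path and bdev name from this run:

    bdev=c1324637-a8e1-4cd7-b462-a531045ca205
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev")
    bs=$(jq '.[] .block_size' <<< "$info")      # 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")      # 26476544
    echo $(( bs * nb / 1024 / 1024 ))           # 103424 (MiB)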
[2024-11-19 08:41:08.420673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.720 [2024-11-19 08:41:08.420754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:46.720 [2024-11-19 08:41:08.420768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:46.720 [2024-11-19 08:41:08.420777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.720 [2024-11-19 08:41:08.420846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.720 [2024-11-19 08:41:08.420859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.720 [2024-11-19 08:41:08.420867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:46.720 [2024-11-19 08:41:08.420880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.720 [2024-11-19 08:41:08.420899] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:46.720 [2024-11-19 08:41:08.421188] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:46.720 [2024-11-19 08:41:08.421210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.720 [2024-11-19 08:41:08.421220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.720 [2024-11-19 08:41:08.421228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:46.720 [2024-11-19 08:41:08.421236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.720 [2024-11-19 08:41:08.421265] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID da080e19-877f-4306-b925-6630d39cb4d4 00:19:46.720 [2024-11-19 08:41:08.422662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.720 [2024-11-19 08:41:08.422683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:46.720 [2024-11-19 08:41:08.422695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:46.720 [2024-11-19 08:41:08.422703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.720 [2024-11-19 08:41:08.430084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.721 [2024-11-19 08:41:08.430110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.721 [2024-11-19 08:41:08.430124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.333 ms 00:19:46.721 [2024-11-19 08:41:08.430131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.721 [2024-11-19 08:41:08.430258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.721 [2024-11-19 08:41:08.430281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.721 [2024-11-19 08:41:08.430291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:46.721 [2024-11-19 08:41:08.430298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.721 [2024-11-19 08:41:08.430362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.721 [2024-11-19 08:41:08.430370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:46.721 [2024-11-19 08:41:08.430380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:46.721 [2024-11-19 08:41:08.430386] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.721 [2024-11-19 08:41:08.430420] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:46.721 [2024-11-19 08:41:08.432102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.721 [2024-11-19 08:41:08.432128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.721 [2024-11-19 08:41:08.432145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.697 ms 00:19:46.721 [2024-11-19 08:41:08.432154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.721 [2024-11-19 08:41:08.432184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.721 [2024-11-19 08:41:08.432194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:46.721 [2024-11-19 08:41:08.432201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:46.721 [2024-11-19 08:41:08.432211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.721 [2024-11-19 08:41:08.432227] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:46.721 [2024-11-19 08:41:08.432347] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:46.721 [2024-11-19 08:41:08.432358] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:46.721 [2024-11-19 08:41:08.432369] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:46.721 [2024-11-19 08:41:08.432379] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:46.721 [2024-11-19 08:41:08.432389] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:46.721 [2024-11-19 08:41:08.432400] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:46.721 [2024-11-19 08:41:08.432416] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:46.721 [2024-11-19 08:41:08.432425] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:46.721 [2024-11-19 08:41:08.432443] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:46.721 [2024-11-19 08:41:08.432450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.721 [2024-11-19 08:41:08.432464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:46.721 [2024-11-19 08:41:08.432472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:19:46.721 [2024-11-19 08:41:08.432481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.721 [2024-11-19 08:41:08.432545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.721 [2024-11-19 08:41:08.432556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:46.721 [2024-11-19 08:41:08.432569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:46.721 [2024-11-19 08:41:08.432578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.721 [2024-11-19 08:41:08.432655] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:46.721 [2024-11-19 08:41:08.432666] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:19:46.721 [2024-11-19 08:41:08.432673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.721 [2024-11-19 08:41:08.432693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.721 [2024-11-19 08:41:08.432701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:46.721 [2024-11-19 08:41:08.432708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:46.721 [2024-11-19 08:41:08.432714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:46.721 [2024-11-19 08:41:08.432738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:46.721 [2024-11-19 08:41:08.432744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:46.721 [2024-11-19 08:41:08.432767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.721 [2024-11-19 08:41:08.432774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:46.721 [2024-11-19 08:41:08.432784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:46.721 [2024-11-19 08:41:08.432790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.721 [2024-11-19 08:41:08.432802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:46.721 [2024-11-19 08:41:08.432809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:46.721 [2024-11-19 08:41:08.432817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.721 [2024-11-19 08:41:08.432824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:46.721 [2024-11-19 08:41:08.432832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:46.721 [2024-11-19 08:41:08.432838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.721 [2024-11-19 08:41:08.432846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:46.721 [2024-11-19 08:41:08.432853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:46.721 [2024-11-19 08:41:08.432861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.721 [2024-11-19 08:41:08.432867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:46.721 [2024-11-19 08:41:08.432875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:46.721 [2024-11-19 08:41:08.432881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.721 [2024-11-19 08:41:08.432889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:46.721 [2024-11-19 08:41:08.432895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:46.721 [2024-11-19 08:41:08.432903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.721 [2024-11-19 08:41:08.432909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:46.721 [2024-11-19 08:41:08.432920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:46.721 [2024-11-19 08:41:08.432927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.721 [2024-11-19 08:41:08.432935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:46.721 [2024-11-19 08:41:08.432941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:46.721 [2024-11-19 08:41:08.432949] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.721 [2024-11-19 08:41:08.432955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:46.721 [2024-11-19 08:41:08.432963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:46.721 [2024-11-19 08:41:08.432969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.721 [2024-11-19 08:41:08.432977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:46.721 [2024-11-19 08:41:08.432984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:46.721 [2024-11-19 08:41:08.432992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.721 [2024-11-19 08:41:08.432998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:46.721 [2024-11-19 08:41:08.433007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:46.721 [2024-11-19 08:41:08.433013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.721 [2024-11-19 08:41:08.433021] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:46.721 [2024-11-19 08:41:08.433028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:46.721 [2024-11-19 08:41:08.433039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.721 [2024-11-19 08:41:08.433055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.721 [2024-11-19 08:41:08.433065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:46.721 [2024-11-19 08:41:08.433072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:46.721 [2024-11-19 08:41:08.433079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:46.721 [2024-11-19 08:41:08.433086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:46.721 [2024-11-19 08:41:08.433094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:46.721 [2024-11-19 08:41:08.433101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:46.721 [2024-11-19 08:41:08.433114] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:46.721 [2024-11-19 08:41:08.433129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.721 [2024-11-19 08:41:08.433142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:46.721 [2024-11-19 08:41:08.433149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:46.721 [2024-11-19 08:41:08.433157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:46.721 [2024-11-19 08:41:08.433164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:46.721 [2024-11-19 08:41:08.433172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:46.721 [2024-11-19 08:41:08.433179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:19:46.721 [2024-11-19 08:41:08.433190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:46.721 [2024-11-19 08:41:08.433197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:46.722 [2024-11-19 08:41:08.433206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:46.722 [2024-11-19 08:41:08.433213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:46.722 [2024-11-19 08:41:08.433221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:46.722 [2024-11-19 08:41:08.433228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:46.722 [2024-11-19 08:41:08.433236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:46.722 [2024-11-19 08:41:08.433243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:46.722 [2024-11-19 08:41:08.433251] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:46.722 [2024-11-19 08:41:08.433259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.722 [2024-11-19 08:41:08.433268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:46.722 [2024-11-19 08:41:08.433275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:46.722 [2024-11-19 08:41:08.433283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:46.722 [2024-11-19 08:41:08.433291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:46.722 [2024-11-19 08:41:08.433300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.722 [2024-11-19 08:41:08.433307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:46.722 [2024-11-19 08:41:08.433320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:19:46.722 [2024-11-19 08:41:08.433328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.722 [2024-11-19 08:41:08.433384] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
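In the superblock metadata dump above, blk_offs and blk_sz are counts of 4096-byte FTL blocks; converting them reproduces the MiB figures printed in the layout dump. For example the l2p region (type 0x2, blk_offs:0x20 blk_sz:0x5000) comes out at offset 0.12 MiB with 80.00 MiB of blocks, which also agrees with the 20971520 L2P entries at the reported 4-byte address size. A quick sanity check of that arithmetic:

    echo $(( 0x5000 * 4096 / 1024 / 1024 ))             # 80   -> "blocks: 80.00 MiB"
    echo "scale=2; $(( 0x20 * 4096 )) / 1048576" | bc    # .12  -> "offset: 0.12 MiB"
    echo $(( 20971520 * 4 / 1024 / 1024 ))               # 80   -> same 80 MiB L2P footprint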
00:19:46.722 [2024-11-19 08:41:08.433393] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:50.976 [2024-11-19 08:41:12.201167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.201226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:50.976 [2024-11-19 08:41:12.201242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3775.046 ms 00:19:50.976 [2024-11-19 08:41:12.201250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.212235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.212279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.976 [2024-11-19 08:41:12.212295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.919 ms 00:19:50.976 [2024-11-19 08:41:12.212320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.212434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.212447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:50.976 [2024-11-19 08:41:12.212457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:50.976 [2024-11-19 08:41:12.212470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.222768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.222804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.976 [2024-11-19 08:41:12.222817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.276 ms 00:19:50.976 [2024-11-19 08:41:12.222825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.222862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.222870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.976 [2024-11-19 08:41:12.222880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:50.976 [2024-11-19 08:41:12.222887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.223340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.223359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.976 [2024-11-19 08:41:12.223369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:19:50.976 [2024-11-19 08:41:12.223376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.223471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.223491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.976 [2024-11-19 08:41:12.223500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:50.976 [2024-11-19 08:41:12.223509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.230324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.230355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.976 [2024-11-19 
08:41:12.230367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.805 ms 00:19:50.976 [2024-11-19 08:41:12.230375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.237307] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:50.976 [2024-11-19 08:41:12.240367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.240393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:50.976 [2024-11-19 08:41:12.240402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.948 ms 00:19:50.976 [2024-11-19 08:41:12.240411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.330766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.330832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:50.976 [2024-11-19 08:41:12.330848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.501 ms 00:19:50.976 [2024-11-19 08:41:12.330859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.331027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.331040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:50.976 [2024-11-19 08:41:12.331060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:19:50.976 [2024-11-19 08:41:12.331070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.334644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.334684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:50.976 [2024-11-19 08:41:12.334695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.563 ms 00:19:50.976 [2024-11-19 08:41:12.334708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.337355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.337391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:50.976 [2024-11-19 08:41:12.337401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.608 ms 00:19:50.976 [2024-11-19 08:41:12.337410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.337667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.337690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:50.976 [2024-11-19 08:41:12.337699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:19:50.976 [2024-11-19 08:41:12.337712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.375528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.375575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:50.976 [2024-11-19 08:41:12.375588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.815 ms 00:19:50.976 [2024-11-19 08:41:12.375611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.380031] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.380071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:50.976 [2024-11-19 08:41:12.380081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.373 ms 00:19:50.976 [2024-11-19 08:41:12.380091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.976 [2024-11-19 08:41:12.383193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.976 [2024-11-19 08:41:12.383231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:50.976 [2024-11-19 08:41:12.383241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.075 ms 00:19:50.977 [2024-11-19 08:41:12.383250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.386930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.386967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:50.977 [2024-11-19 08:41:12.386977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.655 ms 00:19:50.977 [2024-11-19 08:41:12.386989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.387024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.387035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:50.977 [2024-11-19 08:41:12.387044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:50.977 [2024-11-19 08:41:12.387053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.387117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.387136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:50.977 [2024-11-19 08:41:12.387146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:50.977 [2024-11-19 08:41:12.387155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.388332] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3974.707 ms, result 0 00:19:50.977 { 00:19:50.977 "name": "ftl0", 00:19:50.977 "uuid": "da080e19-877f-4306-b925-6630d39cb4d4" 00:19:50.977 } 00:19:50.977 08:41:12 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:50.977 08:41:12 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:50.977 08:41:12 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:50.977 08:41:12 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:50.977 [2024-11-19 08:41:12.804157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.804201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:50.977 [2024-11-19 08:41:12.804214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:50.977 [2024-11-19 08:41:12.804224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.804248] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.977 
[2024-11-19 08:41:12.804954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.804980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:50.977 [2024-11-19 08:41:12.804989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:19:50.977 [2024-11-19 08:41:12.805001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.805201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.805220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:50.977 [2024-11-19 08:41:12.805228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:19:50.977 [2024-11-19 08:41:12.805237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.807521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.807545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:50.977 [2024-11-19 08:41:12.807553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.273 ms 00:19:50.977 [2024-11-19 08:41:12.807562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.812074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.812108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:50.977 [2024-11-19 08:41:12.812116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.506 ms 00:19:50.977 [2024-11-19 08:41:12.812125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.813917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.813959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:50.977 [2024-11-19 08:41:12.813969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:19:50.977 [2024-11-19 08:41:12.813978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.818814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.818869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:50.977 [2024-11-19 08:41:12.818879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.814 ms 00:19:50.977 [2024-11-19 08:41:12.818889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.818990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.819003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:50.977 [2024-11-19 08:41:12.819011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:50.977 [2024-11-19 08:41:12.819026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.820862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.820899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:50.977 [2024-11-19 08:41:12.820908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.824 ms 00:19:50.977 [2024-11-19 08:41:12.820917] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.822425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.822466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:50.977 [2024-11-19 08:41:12.822475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:19:50.977 [2024-11-19 08:41:12.822483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.823717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.823767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:50.977 [2024-11-19 08:41:12.823787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.210 ms 00:19:50.977 [2024-11-19 08:41:12.823796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.824976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.977 [2024-11-19 08:41:12.825012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:50.977 [2024-11-19 08:41:12.825021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:19:50.977 [2024-11-19 08:41:12.825029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.977 [2024-11-19 08:41:12.825056] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:50.977 [2024-11-19 08:41:12.825071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825202] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:50.977 [2024-11-19 08:41:12.825348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 
[2024-11-19 08:41:12.825430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:19:50.978 [2024-11-19 08:41:12.825646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:50.978 [2024-11-19 08:41:12.825996] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:50.978 [2024-11-19 08:41:12.826003] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da080e19-877f-4306-b925-6630d39cb4d4 00:19:50.978 [2024-11-19 08:41:12.826012] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:50.978 [2024-11-19 08:41:12.826021] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:50.978 [2024-11-19 08:41:12.826035] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:50.978 [2024-11-19 08:41:12.826042] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:50.978 [2024-11-19 08:41:12.826050] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:50.978 [2024-11-19 08:41:12.826058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:50.978 [2024-11-19 08:41:12.826070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:50.978 [2024-11-19 08:41:12.826077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:50.978 [2024-11-19 08:41:12.826084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:50.978 [2024-11-19 08:41:12.826091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.978 [2024-11-19 08:41:12.826100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:50.978 [2024-11-19 08:41:12.826108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:19:50.978 [2024-11-19 08:41:12.826117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.978 [2024-11-19 08:41:12.827805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.978 [2024-11-19 08:41:12.827831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:50.978 
[2024-11-19 08:41:12.827840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.675 ms 00:19:50.978 [2024-11-19 08:41:12.827849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.978 [2024-11-19 08:41:12.827952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.978 [2024-11-19 08:41:12.827964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:50.978 [2024-11-19 08:41:12.827981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:50.978 [2024-11-19 08:41:12.827989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.978 [2024-11-19 08:41:12.834225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.978 [2024-11-19 08:41:12.834258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.979 [2024-11-19 08:41:12.834266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.979 [2024-11-19 08:41:12.834278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.979 [2024-11-19 08:41:12.834325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.979 [2024-11-19 08:41:12.834336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.979 [2024-11-19 08:41:12.834343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.979 [2024-11-19 08:41:12.834360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.979 [2024-11-19 08:41:12.834431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.979 [2024-11-19 08:41:12.834448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.979 [2024-11-19 08:41:12.834456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.979 [2024-11-19 08:41:12.834465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.979 [2024-11-19 08:41:12.834484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.979 [2024-11-19 08:41:12.834496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.979 [2024-11-19 08:41:12.834503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.979 [2024-11-19 08:41:12.834512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.979 [2024-11-19 08:41:12.848282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.979 [2024-11-19 08:41:12.848338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.979 [2024-11-19 08:41:12.848349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.979 [2024-11-19 08:41:12.848359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.979 [2024-11-19 08:41:12.857058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.979 [2024-11-19 08:41:12.857099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.979 [2024-11-19 08:41:12.857112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.979 [2024-11-19 08:41:12.857121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.979 [2024-11-19 08:41:12.857194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.979 [2024-11-19 08:41:12.857210] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.979 [2024-11-19 08:41:12.857218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.979 [2024-11-19 08:41:12.857226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.979 [2024-11-19 08:41:12.857279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.979 [2024-11-19 08:41:12.857294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.979 [2024-11-19 08:41:12.857302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.979 [2024-11-19 08:41:12.857311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.979 [2024-11-19 08:41:12.857382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.979 [2024-11-19 08:41:12.857404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.979 [2024-11-19 08:41:12.857412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.979 [2024-11-19 08:41:12.857421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.979 [2024-11-19 08:41:12.857456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.979 [2024-11-19 08:41:12.857470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:50.979 [2024-11-19 08:41:12.857480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.979 [2024-11-19 08:41:12.857489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.979 [2024-11-19 08:41:12.857539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.979 [2024-11-19 08:41:12.857554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.979 [2024-11-19 08:41:12.857561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.979 [2024-11-19 08:41:12.857570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.979 [2024-11-19 08:41:12.857612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.979 [2024-11-19 08:41:12.857628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.979 [2024-11-19 08:41:12.857635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.979 [2024-11-19 08:41:12.857645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.979 [2024-11-19 08:41:12.857779] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.677 ms, result 0 00:19:50.979 true 00:19:51.239 08:41:12 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86949 00:19:51.239 08:41:12 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 86949 ']' 00:19:51.239 08:41:12 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 86949 00:19:51.239 08:41:12 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:19:51.239 08:41:12 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.239 08:41:12 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86949 00:19:51.239 08:41:12 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.239 08:41:12 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.239 killing process with pid 86949 00:19:51.239 
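Before the shutdown sequence traced above, restore.sh@61-65 wrapped the output of save_subsystem_config -n bdev in a subsystems envelope and then unloaded ftl0; that JSON is what the spdk_dd runs further down load through --json so the FTL device can be restored outside the test app. A rough sketch of that step (the redirect target is assumed from the --json path used later in this run; the redirect itself is not shown in the trace):

    {
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0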
08:41:12 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86949' 00:19:51.239 08:41:12 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 86949 00:19:51.239 08:41:12 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 86949 00:19:55.444 08:41:17 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:58.742 262144+0 records in 00:19:58.742 262144+0 records out 00:19:58.742 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.25241 s, 330 MB/s 00:19:58.742 08:41:20 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:00.124 08:41:21 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:00.124 [2024-11-19 08:41:22.013608] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:20:00.124 [2024-11-19 08:41:22.013732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87186 ] 00:20:00.385 [2024-11-19 08:41:22.166837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.385 [2024-11-19 08:41:22.193462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.647 [2024-11-19 08:41:22.295307] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.647 [2024-11-19 08:41:22.295380] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.647 [2024-11-19 08:41:22.448834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.647 [2024-11-19 08:41:22.448885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:00.647 [2024-11-19 08:41:22.448900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:00.647 [2024-11-19 08:41:22.448923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.647 [2024-11-19 08:41:22.448980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.647 [2024-11-19 08:41:22.448991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.647 [2024-11-19 08:41:22.448999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:00.647 [2024-11-19 08:41:22.449007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.647 [2024-11-19 08:41:22.449027] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:00.647 [2024-11-19 08:41:22.449224] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:00.647 [2024-11-19 08:41:22.449256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.647 [2024-11-19 08:41:22.449264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.647 [2024-11-19 08:41:22.449272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:20:00.647 [2024-11-19 08:41:22.449281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.647 [2024-11-19 08:41:22.450660] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: 
[FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:00.647 [2024-11-19 08:41:22.453106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.647 [2024-11-19 08:41:22.453140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:00.647 [2024-11-19 08:41:22.453159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.453 ms 00:20:00.647 [2024-11-19 08:41:22.453166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.647 [2024-11-19 08:41:22.453220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.647 [2024-11-19 08:41:22.453232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:00.647 [2024-11-19 08:41:22.453241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:00.647 [2024-11-19 08:41:22.453247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.647 [2024-11-19 08:41:22.459875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.647 [2024-11-19 08:41:22.459909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.647 [2024-11-19 08:41:22.459917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.597 ms 00:20:00.647 [2024-11-19 08:41:22.459927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.647 [2024-11-19 08:41:22.460010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.647 [2024-11-19 08:41:22.460020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.647 [2024-11-19 08:41:22.460029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:00.647 [2024-11-19 08:41:22.460036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.647 [2024-11-19 08:41:22.460085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.647 [2024-11-19 08:41:22.460094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:00.647 [2024-11-19 08:41:22.460102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:00.647 [2024-11-19 08:41:22.460110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.647 [2024-11-19 08:41:22.460135] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:00.647 [2024-11-19 08:41:22.461693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.647 [2024-11-19 08:41:22.461741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.647 [2024-11-19 08:41:22.461749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.568 ms 00:20:00.647 [2024-11-19 08:41:22.461756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.647 [2024-11-19 08:41:22.461783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.647 [2024-11-19 08:41:22.461791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:00.647 [2024-11-19 08:41:22.461797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:00.647 [2024-11-19 08:41:22.461804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.647 [2024-11-19 08:41:22.461826] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:00.647 [2024-11-19 08:41:22.461845] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:00.647 [2024-11-19 08:41:22.461884] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:00.647 [2024-11-19 08:41:22.461901] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:00.647 [2024-11-19 08:41:22.461980] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:00.647 [2024-11-19 08:41:22.461989] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:00.647 [2024-11-19 08:41:22.461999] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:00.647 [2024-11-19 08:41:22.462011] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:00.647 [2024-11-19 08:41:22.462019] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:00.647 [2024-11-19 08:41:22.462033] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:00.647 [2024-11-19 08:41:22.462040] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:00.647 [2024-11-19 08:41:22.462047] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:00.647 [2024-11-19 08:41:22.462059] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:00.647 [2024-11-19 08:41:22.462066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.647 [2024-11-19 08:41:22.462073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:00.647 [2024-11-19 08:41:22.462080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:20:00.647 [2024-11-19 08:41:22.462087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.647 [2024-11-19 08:41:22.462152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.647 [2024-11-19 08:41:22.462168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:00.647 [2024-11-19 08:41:22.462176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:00.647 [2024-11-19 08:41:22.462182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.647 [2024-11-19 08:41:22.462270] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:00.647 [2024-11-19 08:41:22.462287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:00.647 [2024-11-19 08:41:22.462294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.647 [2024-11-19 08:41:22.462301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.647 [2024-11-19 08:41:22.462308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:00.647 [2024-11-19 08:41:22.462315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:00.647 [2024-11-19 08:41:22.462320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:00.647 [2024-11-19 08:41:22.462327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:00.647 [2024-11-19 08:41:22.462335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:00.647 [2024-11-19 
08:41:22.462341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.647 [2024-11-19 08:41:22.462347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:00.647 [2024-11-19 08:41:22.462357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:00.647 [2024-11-19 08:41:22.462363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.647 [2024-11-19 08:41:22.462369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:00.647 [2024-11-19 08:41:22.462377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:00.647 [2024-11-19 08:41:22.462383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.647 [2024-11-19 08:41:22.462389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:00.647 [2024-11-19 08:41:22.462395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:00.647 [2024-11-19 08:41:22.462402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.647 [2024-11-19 08:41:22.462408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:00.647 [2024-11-19 08:41:22.462414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:00.647 [2024-11-19 08:41:22.462420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.647 [2024-11-19 08:41:22.462426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:00.647 [2024-11-19 08:41:22.462432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:00.648 [2024-11-19 08:41:22.462438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.648 [2024-11-19 08:41:22.462445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:00.648 [2024-11-19 08:41:22.462450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:00.648 [2024-11-19 08:41:22.462460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.648 [2024-11-19 08:41:22.462467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:00.648 [2024-11-19 08:41:22.462473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:00.648 [2024-11-19 08:41:22.462479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.648 [2024-11-19 08:41:22.462485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:00.648 [2024-11-19 08:41:22.462491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:00.648 [2024-11-19 08:41:22.462497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.648 [2024-11-19 08:41:22.462502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:00.648 [2024-11-19 08:41:22.462508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:00.648 [2024-11-19 08:41:22.462514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.648 [2024-11-19 08:41:22.462520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:00.648 [2024-11-19 08:41:22.462526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:00.648 [2024-11-19 08:41:22.462531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.648 [2024-11-19 08:41:22.462537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:20:00.648 [2024-11-19 08:41:22.462543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:00.648 [2024-11-19 08:41:22.462550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.648 [2024-11-19 08:41:22.462559] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:00.648 [2024-11-19 08:41:22.462567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:00.648 [2024-11-19 08:41:22.462575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.648 [2024-11-19 08:41:22.462582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.648 [2024-11-19 08:41:22.462588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:00.648 [2024-11-19 08:41:22.462593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:00.648 [2024-11-19 08:41:22.462599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:00.648 [2024-11-19 08:41:22.462606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:00.648 [2024-11-19 08:41:22.462611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:00.648 [2024-11-19 08:41:22.462616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:00.648 [2024-11-19 08:41:22.462624] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:00.648 [2024-11-19 08:41:22.462632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.648 [2024-11-19 08:41:22.462647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:00.648 [2024-11-19 08:41:22.462655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:00.648 [2024-11-19 08:41:22.462662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:00.648 [2024-11-19 08:41:22.462669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:00.648 [2024-11-19 08:41:22.462678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:00.648 [2024-11-19 08:41:22.462686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:00.648 [2024-11-19 08:41:22.462692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:00.648 [2024-11-19 08:41:22.462699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:00.648 [2024-11-19 08:41:22.462706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:00.648 [2024-11-19 08:41:22.462712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:00.648 [2024-11-19 08:41:22.462730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:00.648 [2024-11-19 08:41:22.462744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:00.648 [2024-11-19 08:41:22.462750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:00.648 [2024-11-19 08:41:22.462757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:00.648 [2024-11-19 08:41:22.462765] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:00.648 [2024-11-19 08:41:22.462779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.648 [2024-11-19 08:41:22.462786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:00.648 [2024-11-19 08:41:22.462793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:00.648 [2024-11-19 08:41:22.462800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:00.648 [2024-11-19 08:41:22.462809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:00.648 [2024-11-19 08:41:22.462819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.648 [2024-11-19 08:41:22.462826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:00.648 [2024-11-19 08:41:22.462833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:20:00.648 [2024-11-19 08:41:22.462840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.648 [2024-11-19 08:41:22.474651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.648 [2024-11-19 08:41:22.474688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.648 [2024-11-19 08:41:22.474700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.787 ms 00:20:00.648 [2024-11-19 08:41:22.474707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.648 [2024-11-19 08:41:22.474782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.648 [2024-11-19 08:41:22.474791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:00.648 [2024-11-19 08:41:22.474798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:00.648 [2024-11-19 08:41:22.474804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.648 [2024-11-19 08:41:22.492327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.648 [2024-11-19 08:41:22.492370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.648 [2024-11-19 08:41:22.492383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.506 ms 00:20:00.648 [2024-11-19 08:41:22.492392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.648 [2024-11-19 08:41:22.492429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.648 [2024-11-19 
08:41:22.492439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.648 [2024-11-19 08:41:22.492449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:00.648 [2024-11-19 08:41:22.492470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.648 [2024-11-19 08:41:22.492981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.648 [2024-11-19 08:41:22.493012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.648 [2024-11-19 08:41:22.493022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:20:00.648 [2024-11-19 08:41:22.493031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.648 [2024-11-19 08:41:22.493155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.648 [2024-11-19 08:41:22.493176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.648 [2024-11-19 08:41:22.493186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:00.648 [2024-11-19 08:41:22.493194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.648 [2024-11-19 08:41:22.500082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.648 [2024-11-19 08:41:22.500124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.648 [2024-11-19 08:41:22.500136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.878 ms 00:20:00.648 [2024-11-19 08:41:22.500145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.648 [2024-11-19 08:41:22.502839] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:00.648 [2024-11-19 08:41:22.502875] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:00.648 [2024-11-19 08:41:22.502886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.648 [2024-11-19 08:41:22.502909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:00.648 [2024-11-19 08:41:22.502917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.651 ms 00:20:00.648 [2024-11-19 08:41:22.502924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.648 [2024-11-19 08:41:22.515208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.648 [2024-11-19 08:41:22.515243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:00.648 [2024-11-19 08:41:22.515258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.266 ms 00:20:00.648 [2024-11-19 08:41:22.515264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.648 [2024-11-19 08:41:22.516920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.648 [2024-11-19 08:41:22.516954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:00.648 [2024-11-19 08:41:22.516963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:20:00.648 [2024-11-19 08:41:22.516970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.648 [2024-11-19 08:41:22.518431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.648 [2024-11-19 08:41:22.518464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:20:00.648 [2024-11-19 08:41:22.518473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:20:00.648 [2024-11-19 08:41:22.518480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.649 [2024-11-19 08:41:22.518752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.649 [2024-11-19 08:41:22.518774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:00.649 [2024-11-19 08:41:22.518784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:20:00.649 [2024-11-19 08:41:22.518791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.649 [2024-11-19 08:41:22.538629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.649 [2024-11-19 08:41:22.538702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:00.649 [2024-11-19 08:41:22.538750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.856 ms 00:20:00.649 [2024-11-19 08:41:22.538758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.649 [2024-11-19 08:41:22.544414] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:00.649 [2024-11-19 08:41:22.547046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.649 [2024-11-19 08:41:22.547081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:00.649 [2024-11-19 08:41:22.547090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.259 ms 00:20:00.649 [2024-11-19 08:41:22.547119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.649 [2024-11-19 08:41:22.547173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.649 [2024-11-19 08:41:22.547182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:00.649 [2024-11-19 08:41:22.547190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:00.649 [2024-11-19 08:41:22.547205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.649 [2024-11-19 08:41:22.547304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.649 [2024-11-19 08:41:22.547324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:00.649 [2024-11-19 08:41:22.547332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:00.649 [2024-11-19 08:41:22.547346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.649 [2024-11-19 08:41:22.547374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.649 [2024-11-19 08:41:22.547384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:00.649 [2024-11-19 08:41:22.547393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:00.649 [2024-11-19 08:41:22.547400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.649 [2024-11-19 08:41:22.547431] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:00.649 [2024-11-19 08:41:22.547445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.649 [2024-11-19 08:41:22.547453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:00.649 [2024-11-19 08:41:22.547460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.011 ms 00:20:00.649 [2024-11-19 08:41:22.547468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.909 [2024-11-19 08:41:22.551081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.909 [2024-11-19 08:41:22.551120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:00.909 [2024-11-19 08:41:22.551129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.597 ms 00:20:00.909 [2024-11-19 08:41:22.551153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.909 [2024-11-19 08:41:22.551217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.909 [2024-11-19 08:41:22.551228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:00.909 [2024-11-19 08:41:22.551236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:00.909 [2024-11-19 08:41:22.551243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.909 [2024-11-19 08:41:22.552281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.245 ms, result 0 00:20:01.850  [2024-11-19T08:41:24.698Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-19T08:41:25.638Z] Copying: 50/1024 [MB] (25 MBps) [2024-11-19T08:41:26.578Z] Copying: 74/1024 [MB] (24 MBps) [2024-11-19T08:41:27.961Z] Copying: 100/1024 [MB] (25 MBps) [2024-11-19T08:41:28.899Z] Copying: 127/1024 [MB] (26 MBps) [2024-11-19T08:41:29.839Z] Copying: 153/1024 [MB] (25 MBps) [2024-11-19T08:41:30.779Z] Copying: 178/1024 [MB] (24 MBps) [2024-11-19T08:41:31.720Z] Copying: 202/1024 [MB] (24 MBps) [2024-11-19T08:41:32.660Z] Copying: 227/1024 [MB] (25 MBps) [2024-11-19T08:41:33.599Z] Copying: 251/1024 [MB] (24 MBps) [2024-11-19T08:41:34.981Z] Copying: 277/1024 [MB] (25 MBps) [2024-11-19T08:41:35.551Z] Copying: 301/1024 [MB] (24 MBps) [2024-11-19T08:41:36.940Z] Copying: 326/1024 [MB] (25 MBps) [2024-11-19T08:41:37.562Z] Copying: 351/1024 [MB] (24 MBps) [2024-11-19T08:41:38.945Z] Copying: 375/1024 [MB] (24 MBps) [2024-11-19T08:41:39.885Z] Copying: 400/1024 [MB] (25 MBps) [2024-11-19T08:41:40.825Z] Copying: 425/1024 [MB] (24 MBps) [2024-11-19T08:41:41.766Z] Copying: 450/1024 [MB] (24 MBps) [2024-11-19T08:41:42.707Z] Copying: 474/1024 [MB] (24 MBps) [2024-11-19T08:41:43.647Z] Copying: 498/1024 [MB] (24 MBps) [2024-11-19T08:41:44.587Z] Copying: 522/1024 [MB] (23 MBps) [2024-11-19T08:41:45.527Z] Copying: 545/1024 [MB] (23 MBps) [2024-11-19T08:41:46.911Z] Copying: 570/1024 [MB] (25 MBps) [2024-11-19T08:41:47.851Z] Copying: 595/1024 [MB] (24 MBps) [2024-11-19T08:41:48.792Z] Copying: 619/1024 [MB] (23 MBps) [2024-11-19T08:41:49.746Z] Copying: 643/1024 [MB] (24 MBps) [2024-11-19T08:41:50.686Z] Copying: 669/1024 [MB] (25 MBps) [2024-11-19T08:41:51.625Z] Copying: 694/1024 [MB] (25 MBps) [2024-11-19T08:41:52.565Z] Copying: 719/1024 [MB] (24 MBps) [2024-11-19T08:41:53.946Z] Copying: 743/1024 [MB] (24 MBps) [2024-11-19T08:41:54.517Z] Copying: 767/1024 [MB] (23 MBps) [2024-11-19T08:41:55.899Z] Copying: 791/1024 [MB] (23 MBps) [2024-11-19T08:41:56.839Z] Copying: 815/1024 [MB] (23 MBps) [2024-11-19T08:41:57.786Z] Copying: 838/1024 [MB] (23 MBps) [2024-11-19T08:41:58.730Z] Copying: 863/1024 [MB] (24 MBps) [2024-11-19T08:41:59.672Z] Copying: 889/1024 [MB] (26 MBps) [2024-11-19T08:42:00.645Z] Copying: 914/1024 [MB] (24 MBps) [2024-11-19T08:42:01.585Z] Copying: 938/1024 [MB] (24 MBps) [2024-11-19T08:42:02.524Z] Copying: 963/1024 [MB] (24 
MBps) [2024-11-19T08:42:03.906Z] Copying: 987/1024 [MB] (24 MBps) [2024-11-19T08:42:04.168Z] Copying: 1012/1024 [MB] (24 MBps) [2024-11-19T08:42:04.168Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-19 08:42:03.945004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.261 [2024-11-19 08:42:03.945070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:42.261 [2024-11-19 08:42:03.945085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:42.261 [2024-11-19 08:42:03.945093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.261 [2024-11-19 08:42:03.945124] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:42.261 [2024-11-19 08:42:03.945828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.261 [2024-11-19 08:42:03.945841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:42.261 [2024-11-19 08:42:03.945856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:20:42.261 [2024-11-19 08:42:03.945864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.261 [2024-11-19 08:42:03.947953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.261 [2024-11-19 08:42:03.947991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:42.261 [2024-11-19 08:42:03.948000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:20:42.261 [2024-11-19 08:42:03.948019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.261 [2024-11-19 08:42:03.964083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.261 [2024-11-19 08:42:03.964127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:42.261 [2024-11-19 08:42:03.964137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.077 ms 00:20:42.261 [2024-11-19 08:42:03.964144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.261 [2024-11-19 08:42:03.968751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.261 [2024-11-19 08:42:03.968782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:42.261 [2024-11-19 08:42:03.968807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.585 ms 00:20:42.261 [2024-11-19 08:42:03.968814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.261 [2024-11-19 08:42:03.970277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.261 [2024-11-19 08:42:03.970311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:42.261 [2024-11-19 08:42:03.970320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:20:42.261 [2024-11-19 08:42:03.970326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.261 [2024-11-19 08:42:03.975123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.261 [2024-11-19 08:42:03.975164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:42.261 [2024-11-19 08:42:03.975173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.781 ms 00:20:42.261 [2024-11-19 08:42:03.975179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.261 [2024-11-19 08:42:03.975286] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.261 [2024-11-19 08:42:03.975295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:42.261 [2024-11-19 08:42:03.975305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:42.261 [2024-11-19 08:42:03.975312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.261 [2024-11-19 08:42:03.977739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.261 [2024-11-19 08:42:03.977786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:42.261 [2024-11-19 08:42:03.977795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.419 ms 00:20:42.261 [2024-11-19 08:42:03.977801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.261 [2024-11-19 08:42:03.979170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.261 [2024-11-19 08:42:03.979203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:42.261 [2024-11-19 08:42:03.979212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.326 ms 00:20:42.261 [2024-11-19 08:42:03.979219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.261 [2024-11-19 08:42:03.980439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.261 [2024-11-19 08:42:03.980471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:42.261 [2024-11-19 08:42:03.980479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:20:42.261 [2024-11-19 08:42:03.980485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.261 [2024-11-19 08:42:03.981583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.261 [2024-11-19 08:42:03.981618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:42.261 [2024-11-19 08:42:03.981626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:20:42.261 [2024-11-19 08:42:03.981632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.261 [2024-11-19 08:42:03.981654] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:42.261 [2024-11-19 08:42:03.981668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981944] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.981994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982131] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 
08:42:03.982313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:42.262 [2024-11-19 08:42:03.982347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:42.263 [2024-11-19 08:42:03.982353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:42.263 [2024-11-19 08:42:03.982360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:42.263 [2024-11-19 08:42:03.982367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:42.263 [2024-11-19 08:42:03.982374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:42.263 [2024-11-19 08:42:03.982381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:42.263 [2024-11-19 08:42:03.982387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:42.263 [2024-11-19 08:42:03.982395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:42.263 [2024-11-19 08:42:03.982401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:42.263 [2024-11-19 08:42:03.982408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:42.263 [2024-11-19 08:42:03.982414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:42.263 [2024-11-19 08:42:03.982420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:42.263 [2024-11-19 08:42:03.982434] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:42.263 [2024-11-19 08:42:03.982441] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da080e19-877f-4306-b925-6630d39cb4d4 00:20:42.263 [2024-11-19 08:42:03.982449] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:42.263 [2024-11-19 08:42:03.982455] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:42.263 [2024-11-19 08:42:03.982462] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:42.263 [2024-11-19 08:42:03.982469] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:42.263 [2024-11-19 08:42:03.982475] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:42.263 [2024-11-19 08:42:03.982482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:42.263 [2024-11-19 08:42:03.982489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:42.263 [2024-11-19 08:42:03.982494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
low: 0 00:20:42.263 [2024-11-19 08:42:03.982500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:42.263 [2024-11-19 08:42:03.982507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.263 [2024-11-19 08:42:03.982514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:42.263 [2024-11-19 08:42:03.982522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:20:42.263 [2024-11-19 08:42:03.982544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:03.984214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.263 [2024-11-19 08:42:03.984234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:42.263 [2024-11-19 08:42:03.984242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.658 ms 00:20:42.263 [2024-11-19 08:42:03.984249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:03.984353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.263 [2024-11-19 08:42:03.984363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:42.263 [2024-11-19 08:42:03.984380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:20:42.263 [2024-11-19 08:42:03.984387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:03.990201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.263 [2024-11-19 08:42:03.990230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.263 [2024-11-19 08:42:03.990238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.263 [2024-11-19 08:42:03.990244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:03.990290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.263 [2024-11-19 08:42:03.990298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.263 [2024-11-19 08:42:03.990310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.263 [2024-11-19 08:42:03.990317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:03.990353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.263 [2024-11-19 08:42:03.990363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.263 [2024-11-19 08:42:03.990370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.263 [2024-11-19 08:42:03.990377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:03.990398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.263 [2024-11-19 08:42:03.990406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.263 [2024-11-19 08:42:03.990412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.263 [2024-11-19 08:42:03.990427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:04.003757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.263 [2024-11-19 08:42:04.003808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.263 [2024-11-19 08:42:04.003829] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.263 [2024-11-19 08:42:04.003837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:04.011815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.263 [2024-11-19 08:42:04.011856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.263 [2024-11-19 08:42:04.011872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.263 [2024-11-19 08:42:04.011879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:04.011923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.263 [2024-11-19 08:42:04.011931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.263 [2024-11-19 08:42:04.011939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.263 [2024-11-19 08:42:04.011945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:04.011966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.263 [2024-11-19 08:42:04.011974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.263 [2024-11-19 08:42:04.011992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.263 [2024-11-19 08:42:04.011999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:04.012072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.263 [2024-11-19 08:42:04.012082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.263 [2024-11-19 08:42:04.012090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.263 [2024-11-19 08:42:04.012096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:04.012128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.263 [2024-11-19 08:42:04.012138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:42.263 [2024-11-19 08:42:04.012146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.263 [2024-11-19 08:42:04.012153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:04.012200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.263 [2024-11-19 08:42:04.012208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.263 [2024-11-19 08:42:04.012215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.263 [2024-11-19 08:42:04.012222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:04.012261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.263 [2024-11-19 08:42:04.012269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.263 [2024-11-19 08:42:04.012276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.263 [2024-11-19 08:42:04.012283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.263 [2024-11-19 08:42:04.012398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.489 ms, result 0 00:20:42.834 00:20:42.834 
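The write pass that just finished ("FTL shutdown ... result 0") and the read-back pass that follows are the heart of restore.sh in this run: fill a 1 GiB file from /dev/urandom (256K blocks of 4 KiB, i.e. the same 262144 blocks passed as --count below), record its md5sum, write it into the ftl0 bdev with spdk_dd, then read the identical range back for comparison. The condensed sketch below uses only the spdk_dd options that appear verbatim in this log; paths are shortened stand-ins for the /home/vagrant/spdk_repo locations, and the final checksum comparison is presumed to happen later in restore.sh, outside this excerpt.

# Condensed sketch of the restore.sh flow traced in this log (paths shortened).
dd if=/dev/urandom of=testfile bs=4K count=256K                  # 262144 x 4 KiB = 1 GiB of random data
md5sum testfile                                                  # checksum of the source data (restore.sh@70)
spdk_dd --if=testfile --ob=ftl0 --json=ftl.json                  # write pass: file -> ftl0 bdev (restore.sh@73)
spdk_dd --ib=ftl0 --of=testfile --json=ftl.json --count=262144   # read-back pass: ftl0 -> file (restore.sh@74)
# The md5 sums are then compared to confirm the data survived the FTL
# shutdown/startup cycle (presumed; the compare itself is outside this excerpt).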
00:20:42.834 08:42:04 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:42.834 [2024-11-19 08:42:04.517676] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:20:42.834 [2024-11-19 08:42:04.517806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87620 ] 00:20:42.834 [2024-11-19 08:42:04.673773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.834 [2024-11-19 08:42:04.700143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.095 [2024-11-19 08:42:04.802262] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:43.095 [2024-11-19 08:42:04.802336] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:43.095 [2024-11-19 08:42:04.956992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.095 [2024-11-19 08:42:04.957038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:43.095 [2024-11-19 08:42:04.957053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:43.095 [2024-11-19 08:42:04.957076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.095 [2024-11-19 08:42:04.957132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.095 [2024-11-19 08:42:04.957143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:43.095 [2024-11-19 08:42:04.957150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:43.095 [2024-11-19 08:42:04.957165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.095 [2024-11-19 08:42:04.957190] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:43.095 [2024-11-19 08:42:04.957385] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:43.095 [2024-11-19 08:42:04.957410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.095 [2024-11-19 08:42:04.957418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:43.095 [2024-11-19 08:42:04.957426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:20:43.095 [2024-11-19 08:42:04.957441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.095 [2024-11-19 08:42:04.958804] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:43.095 [2024-11-19 08:42:04.961198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.095 [2024-11-19 08:42:04.961233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:43.095 [2024-11-19 08:42:04.961243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:20:43.095 [2024-11-19 08:42:04.961250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.095 [2024-11-19 08:42:04.961322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.095 [2024-11-19 08:42:04.961333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:20:43.095 [2024-11-19 08:42:04.961341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:43.095 [2024-11-19 08:42:04.961349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.095 [2024-11-19 08:42:04.967968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.095 [2024-11-19 08:42:04.968001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:43.095 [2024-11-19 08:42:04.968010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.588 ms 00:20:43.096 [2024-11-19 08:42:04.968019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.096 [2024-11-19 08:42:04.968096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.096 [2024-11-19 08:42:04.968106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:43.096 [2024-11-19 08:42:04.968114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:43.096 [2024-11-19 08:42:04.968121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.096 [2024-11-19 08:42:04.968170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.096 [2024-11-19 08:42:04.968180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:43.096 [2024-11-19 08:42:04.968188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:43.096 [2024-11-19 08:42:04.968195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.096 [2024-11-19 08:42:04.968218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:43.096 [2024-11-19 08:42:04.969811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.096 [2024-11-19 08:42:04.969849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:43.096 [2024-11-19 08:42:04.969857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:20:43.096 [2024-11-19 08:42:04.969864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.096 [2024-11-19 08:42:04.969891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.096 [2024-11-19 08:42:04.969899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:43.096 [2024-11-19 08:42:04.969906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:43.096 [2024-11-19 08:42:04.969920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.096 [2024-11-19 08:42:04.969942] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:43.096 [2024-11-19 08:42:04.969969] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:43.096 [2024-11-19 08:42:04.970004] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:43.096 [2024-11-19 08:42:04.970020] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:43.096 [2024-11-19 08:42:04.970099] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:43.096 [2024-11-19 08:42:04.970109] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 
0x48 bytes 00:20:43.096 [2024-11-19 08:42:04.970124] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:43.096 [2024-11-19 08:42:04.970137] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:43.096 [2024-11-19 08:42:04.970145] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:43.096 [2024-11-19 08:42:04.970161] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:43.096 [2024-11-19 08:42:04.970169] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:43.096 [2024-11-19 08:42:04.970176] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:43.096 [2024-11-19 08:42:04.970183] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:43.096 [2024-11-19 08:42:04.970190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.096 [2024-11-19 08:42:04.970197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:43.096 [2024-11-19 08:42:04.970204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:20:43.096 [2024-11-19 08:42:04.970210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.096 [2024-11-19 08:42:04.970274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.096 [2024-11-19 08:42:04.970284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:43.096 [2024-11-19 08:42:04.970290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:43.096 [2024-11-19 08:42:04.970297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.096 [2024-11-19 08:42:04.970382] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:43.096 [2024-11-19 08:42:04.970394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:43.096 [2024-11-19 08:42:04.970402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:43.096 [2024-11-19 08:42:04.970410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.096 [2024-11-19 08:42:04.970417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:43.096 [2024-11-19 08:42:04.970423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:43.096 [2024-11-19 08:42:04.970443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:43.096 [2024-11-19 08:42:04.970449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:43.096 [2024-11-19 08:42:04.970455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:43.096 [2024-11-19 08:42:04.970461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:43.096 [2024-11-19 08:42:04.970468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:43.096 [2024-11-19 08:42:04.970477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:43.096 [2024-11-19 08:42:04.970485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:43.096 [2024-11-19 08:42:04.970491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:43.096 [2024-11-19 08:42:04.970498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:43.096 [2024-11-19 08:42:04.970504] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.096 [2024-11-19 08:42:04.970510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:43.096 [2024-11-19 08:42:04.970516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:43.096 [2024-11-19 08:42:04.970522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.096 [2024-11-19 08:42:04.970528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:43.096 [2024-11-19 08:42:04.970534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:43.096 [2024-11-19 08:42:04.970541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.096 [2024-11-19 08:42:04.970548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:43.096 [2024-11-19 08:42:04.970554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:43.096 [2024-11-19 08:42:04.970559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.096 [2024-11-19 08:42:04.970565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:43.096 [2024-11-19 08:42:04.970571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:43.096 [2024-11-19 08:42:04.970583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.096 [2024-11-19 08:42:04.970591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:43.096 [2024-11-19 08:42:04.970597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:43.096 [2024-11-19 08:42:04.970603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.096 [2024-11-19 08:42:04.970609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:43.096 [2024-11-19 08:42:04.970615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:43.096 [2024-11-19 08:42:04.970622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:43.096 [2024-11-19 08:42:04.970628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:43.096 [2024-11-19 08:42:04.970633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:43.096 [2024-11-19 08:42:04.970639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:43.096 [2024-11-19 08:42:04.970644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:43.096 [2024-11-19 08:42:04.970650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:43.096 [2024-11-19 08:42:04.970656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.096 [2024-11-19 08:42:04.970662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:43.096 [2024-11-19 08:42:04.970667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:43.096 [2024-11-19 08:42:04.970673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.096 [2024-11-19 08:42:04.970681] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:43.096 [2024-11-19 08:42:04.970688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:43.096 [2024-11-19 08:42:04.970696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:43.096 [2024-11-19 08:42:04.970702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.096 
[2024-11-19 08:42:04.970709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:43.096 [2024-11-19 08:42:04.970715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:43.096 [2024-11-19 08:42:04.970733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:43.096 [2024-11-19 08:42:04.970739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:43.096 [2024-11-19 08:42:04.970745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:43.096 [2024-11-19 08:42:04.970751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:43.096 [2024-11-19 08:42:04.970759] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:43.096 [2024-11-19 08:42:04.970768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:43.096 [2024-11-19 08:42:04.970776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:43.096 [2024-11-19 08:42:04.970783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:43.096 [2024-11-19 08:42:04.970790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:43.096 [2024-11-19 08:42:04.970797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:43.096 [2024-11-19 08:42:04.970805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:43.096 [2024-11-19 08:42:04.970827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:43.096 [2024-11-19 08:42:04.970834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:43.097 [2024-11-19 08:42:04.970841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:43.097 [2024-11-19 08:42:04.970847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:43.097 [2024-11-19 08:42:04.970854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:43.097 [2024-11-19 08:42:04.970860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:43.097 [2024-11-19 08:42:04.970876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:43.097 [2024-11-19 08:42:04.970883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:43.097 [2024-11-19 08:42:04.970890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:43.097 [2024-11-19 08:42:04.970896] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB 
metadata layout - base dev: 00:20:43.097 [2024-11-19 08:42:04.970909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:43.097 [2024-11-19 08:42:04.970916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:43.097 [2024-11-19 08:42:04.970922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:43.097 [2024-11-19 08:42:04.970928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:43.097 [2024-11-19 08:42:04.970935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:43.097 [2024-11-19 08:42:04.970945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.097 [2024-11-19 08:42:04.970953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:43.097 [2024-11-19 08:42:04.970966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:20:43.097 [2024-11-19 08:42:04.970978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.097 [2024-11-19 08:42:04.982739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.097 [2024-11-19 08:42:04.982780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:43.097 [2024-11-19 08:42:04.982791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.737 ms 00:20:43.097 [2024-11-19 08:42:04.982798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.097 [2024-11-19 08:42:04.982868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.097 [2024-11-19 08:42:04.982876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:43.097 [2024-11-19 08:42:04.982883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:43.097 [2024-11-19 08:42:04.982910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.008338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.008450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:43.358 [2024-11-19 08:42:05.008529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.415 ms 00:20:43.358 [2024-11-19 08:42:05.008590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.008799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.008854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:43.358 [2024-11-19 08:42:05.008888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:43.358 [2024-11-19 08:42:05.008948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.009898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.009976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:43.358 [2024-11-19 08:42:05.010011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:20:43.358 [2024-11-19 08:42:05.010070] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.010457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.010528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:43.358 [2024-11-19 08:42:05.010561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:20:43.358 [2024-11-19 08:42:05.010589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.021144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.021234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:43.358 [2024-11-19 08:42:05.021259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.508 ms 00:20:43.358 [2024-11-19 08:42:05.021277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.024778] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:43.358 [2024-11-19 08:42:05.024826] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:43.358 [2024-11-19 08:42:05.024844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.024857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:43.358 [2024-11-19 08:42:05.024870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.330 ms 00:20:43.358 [2024-11-19 08:42:05.024881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.042476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.042518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:43.358 [2024-11-19 08:42:05.042531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.566 ms 00:20:43.358 [2024-11-19 08:42:05.042539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.044432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.044464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:43.358 [2024-11-19 08:42:05.044472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.852 ms 00:20:43.358 [2024-11-19 08:42:05.044478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.046052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.046084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:43.358 [2024-11-19 08:42:05.046095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:20:43.358 [2024-11-19 08:42:05.046101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.046418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.046450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:43.358 [2024-11-19 08:42:05.046470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:20:43.358 [2024-11-19 08:42:05.046478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:20:43.358 [2024-11-19 08:42:05.066520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.066594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:43.358 [2024-11-19 08:42:05.066608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.052 ms 00:20:43.358 [2024-11-19 08:42:05.066616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.072302] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:43.358 [2024-11-19 08:42:05.074934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.074965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:43.358 [2024-11-19 08:42:05.074975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.288 ms 00:20:43.358 [2024-11-19 08:42:05.074992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.075045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.075054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:43.358 [2024-11-19 08:42:05.075062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:43.358 [2024-11-19 08:42:05.075069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.075158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.075168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:43.358 [2024-11-19 08:42:05.075180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:43.358 [2024-11-19 08:42:05.075186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-19 08:42:05.075209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-19 08:42:05.075219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:43.358 [2024-11-19 08:42:05.075227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:43.358 [2024-11-19 08:42:05.075233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.359 [2024-11-19 08:42:05.075265] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:43.359 [2024-11-19 08:42:05.075276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.359 [2024-11-19 08:42:05.075284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:43.359 [2024-11-19 08:42:05.075291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:43.359 [2024-11-19 08:42:05.075301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.359 [2024-11-19 08:42:05.079028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.359 [2024-11-19 08:42:05.079067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:43.359 [2024-11-19 08:42:05.079077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.718 ms 00:20:43.359 [2024-11-19 08:42:05.079084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.359 [2024-11-19 08:42:05.079148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.359 [2024-11-19 
08:42:05.079157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:43.359 [2024-11-19 08:42:05.079165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:43.359 [2024-11-19 08:42:05.079174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.359 [2024-11-19 08:42:05.080175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 123.027 ms, result 0 00:20:44.742  [2024-11-19T08:42:07.588Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-19T08:42:08.528Z] Copying: 54/1024 [MB] (26 MBps) [2024-11-19T08:42:09.468Z] Copying: 81/1024 [MB] (27 MBps) [2024-11-19T08:42:10.407Z] Copying: 108/1024 [MB] (26 MBps) [2024-11-19T08:42:11.347Z] Copying: 135/1024 [MB] (27 MBps) [2024-11-19T08:42:12.286Z] Copying: 163/1024 [MB] (27 MBps) [2024-11-19T08:42:13.227Z] Copying: 190/1024 [MB] (27 MBps) [2024-11-19T08:42:14.608Z] Copying: 217/1024 [MB] (26 MBps) [2024-11-19T08:42:15.547Z] Copying: 244/1024 [MB] (27 MBps) [2024-11-19T08:42:16.486Z] Copying: 271/1024 [MB] (27 MBps) [2024-11-19T08:42:17.425Z] Copying: 297/1024 [MB] (26 MBps) [2024-11-19T08:42:18.363Z] Copying: 323/1024 [MB] (26 MBps) [2024-11-19T08:42:19.303Z] Copying: 351/1024 [MB] (27 MBps) [2024-11-19T08:42:20.243Z] Copying: 378/1024 [MB] (27 MBps) [2024-11-19T08:42:21.629Z] Copying: 405/1024 [MB] (26 MBps) [2024-11-19T08:42:22.212Z] Copying: 432/1024 [MB] (27 MBps) [2024-11-19T08:42:23.594Z] Copying: 460/1024 [MB] (27 MBps) [2024-11-19T08:42:24.535Z] Copying: 486/1024 [MB] (26 MBps) [2024-11-19T08:42:25.474Z] Copying: 513/1024 [MB] (26 MBps) [2024-11-19T08:42:26.416Z] Copying: 541/1024 [MB] (27 MBps) [2024-11-19T08:42:27.356Z] Copying: 569/1024 [MB] (27 MBps) [2024-11-19T08:42:28.296Z] Copying: 596/1024 [MB] (27 MBps) [2024-11-19T08:42:29.235Z] Copying: 622/1024 [MB] (25 MBps) [2024-11-19T08:42:30.617Z] Copying: 649/1024 [MB] (26 MBps) [2024-11-19T08:42:31.188Z] Copying: 676/1024 [MB] (26 MBps) [2024-11-19T08:42:32.571Z] Copying: 702/1024 [MB] (26 MBps) [2024-11-19T08:42:33.511Z] Copying: 729/1024 [MB] (26 MBps) [2024-11-19T08:42:34.451Z] Copying: 756/1024 [MB] (27 MBps) [2024-11-19T08:42:35.393Z] Copying: 782/1024 [MB] (26 MBps) [2024-11-19T08:42:36.332Z] Copying: 810/1024 [MB] (27 MBps) [2024-11-19T08:42:37.272Z] Copying: 836/1024 [MB] (26 MBps) [2024-11-19T08:42:38.212Z] Copying: 863/1024 [MB] (26 MBps) [2024-11-19T08:42:39.594Z] Copying: 889/1024 [MB] (26 MBps) [2024-11-19T08:42:40.164Z] Copying: 915/1024 [MB] (26 MBps) [2024-11-19T08:42:41.547Z] Copying: 942/1024 [MB] (27 MBps) [2024-11-19T08:42:42.487Z] Copying: 969/1024 [MB] (26 MBps) [2024-11-19T08:42:43.428Z] Copying: 996/1024 [MB] (27 MBps) [2024-11-19T08:42:44.369Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-19 08:42:44.267899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.462 [2024-11-19 08:42:44.267974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:22.462 [2024-11-19 08:42:44.267991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:22.462 [2024-11-19 08:42:44.268005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.462 [2024-11-19 08:42:44.268036] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:22.462 [2024-11-19 08:42:44.269132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.462 [2024-11-19 08:42:44.269180] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:22.462 [2024-11-19 08:42:44.269209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:21:22.462 [2024-11-19 08:42:44.269229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.462 [2024-11-19 08:42:44.269452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.462 [2024-11-19 08:42:44.269491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:22.462 [2024-11-19 08:42:44.269517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:21:22.462 [2024-11-19 08:42:44.269541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.462 [2024-11-19 08:42:44.272052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.462 [2024-11-19 08:42:44.272107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:22.462 [2024-11-19 08:42:44.272131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.427 ms 00:21:22.462 [2024-11-19 08:42:44.272173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.462 [2024-11-19 08:42:44.277358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.462 [2024-11-19 08:42:44.277452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:22.462 [2024-11-19 08:42:44.277502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.151 ms 00:21:22.462 [2024-11-19 08:42:44.277522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.462 [2024-11-19 08:42:44.279201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.462 [2024-11-19 08:42:44.279286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:22.462 [2024-11-19 08:42:44.279319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:21:22.462 [2024-11-19 08:42:44.279340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.462 [2024-11-19 08:42:44.284978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.462 [2024-11-19 08:42:44.285063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:22.462 [2024-11-19 08:42:44.285095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.587 ms 00:21:22.462 [2024-11-19 08:42:44.285116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.462 [2024-11-19 08:42:44.285255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.463 [2024-11-19 08:42:44.285297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:22.463 [2024-11-19 08:42:44.285327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:22.463 [2024-11-19 08:42:44.285348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.463 [2024-11-19 08:42:44.287993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.463 [2024-11-19 08:42:44.288069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:22.463 [2024-11-19 08:42:44.288104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.582 ms 00:21:22.463 [2024-11-19 08:42:44.288123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.463 [2024-11-19 08:42:44.290133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:22.463 [2024-11-19 08:42:44.290170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:22.463 [2024-11-19 08:42:44.290180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.957 ms 00:21:22.463 [2024-11-19 08:42:44.290187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.463 [2024-11-19 08:42:44.291573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.463 [2024-11-19 08:42:44.291611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:22.463 [2024-11-19 08:42:44.291620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:21:22.463 [2024-11-19 08:42:44.291628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.463 [2024-11-19 08:42:44.293157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.463 [2024-11-19 08:42:44.293194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:22.463 [2024-11-19 08:42:44.293203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:21:22.463 [2024-11-19 08:42:44.293210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.463 [2024-11-19 08:42:44.293236] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:22.463 [2024-11-19 08:42:44.293250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293366] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 
08:42:44.293553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:22.463 [2024-11-19 08:42:44.293670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:21:22.464 [2024-11-19 08:42:44.293793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.293993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.294000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.294007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.294016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.294024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.294031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.294038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.294045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.294052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.294058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:22.464 [2024-11-19 08:42:44.294073] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:22.464 [2024-11-19 08:42:44.294089] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da080e19-877f-4306-b925-6630d39cb4d4 00:21:22.464 [2024-11-19 08:42:44.294097] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:22.464 [2024-11-19 08:42:44.294103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:22.464 [2024-11-19 08:42:44.294118] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:22.464 [2024-11-19 08:42:44.294126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:22.464 [2024-11-19 08:42:44.294133] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:22.464 [2024-11-19 08:42:44.294142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:22.464 [2024-11-19 08:42:44.294160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:22.464 [2024-11-19 08:42:44.294165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:22.464 [2024-11-19 08:42:44.294172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:22.464 [2024-11-19 08:42:44.294179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.464 [2024-11-19 08:42:44.294199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:22.464 [2024-11-19 08:42:44.294208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:21:22.464 [2024-11-19 08:42:44.294215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.464 [2024-11-19 08:42:44.296404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.464 [2024-11-19 08:42:44.296434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:22.464 [2024-11-19 08:42:44.296443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.177 ms 00:21:22.464 [2024-11-19 08:42:44.296456] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.464 [2024-11-19 08:42:44.296581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.464 [2024-11-19 08:42:44.296591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:22.464 [2024-11-19 08:42:44.296598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:21:22.464 [2024-11-19 08:42:44.296605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.464 [2024-11-19 08:42:44.304073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.464 [2024-11-19 08:42:44.304102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.464 [2024-11-19 08:42:44.304111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.464 [2024-11-19 08:42:44.304121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.464 [2024-11-19 08:42:44.304176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.464 [2024-11-19 08:42:44.304184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.464 [2024-11-19 08:42:44.304192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.464 [2024-11-19 08:42:44.304198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.464 [2024-11-19 08:42:44.304260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.464 [2024-11-19 08:42:44.304277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.464 [2024-11-19 08:42:44.304285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.464 [2024-11-19 08:42:44.304292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.464 [2024-11-19 08:42:44.304308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.464 [2024-11-19 08:42:44.304320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.464 [2024-11-19 08:42:44.304327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.464 [2024-11-19 08:42:44.304334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.464 [2024-11-19 08:42:44.320673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.464 [2024-11-19 08:42:44.320741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.464 [2024-11-19 08:42:44.320753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.464 [2024-11-19 08:42:44.320780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.464 [2024-11-19 08:42:44.330401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.464 [2024-11-19 08:42:44.330440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.465 [2024-11-19 08:42:44.330451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.465 [2024-11-19 08:42:44.330459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.465 [2024-11-19 08:42:44.330512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.465 [2024-11-19 08:42:44.330522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:22.465 [2024-11-19 08:42:44.330529] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.465 [2024-11-19 08:42:44.330537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.465 [2024-11-19 08:42:44.330560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.465 [2024-11-19 08:42:44.330568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:22.465 [2024-11-19 08:42:44.330581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.465 [2024-11-19 08:42:44.330589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.465 [2024-11-19 08:42:44.330668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.465 [2024-11-19 08:42:44.330680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:22.465 [2024-11-19 08:42:44.330694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.465 [2024-11-19 08:42:44.330701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.465 [2024-11-19 08:42:44.330750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.465 [2024-11-19 08:42:44.330762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:22.465 [2024-11-19 08:42:44.330770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.465 [2024-11-19 08:42:44.330781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.465 [2024-11-19 08:42:44.331105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.465 [2024-11-19 08:42:44.331124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:22.465 [2024-11-19 08:42:44.331132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.465 [2024-11-19 08:42:44.331139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.465 [2024-11-19 08:42:44.331195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.465 [2024-11-19 08:42:44.331206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:22.465 [2024-11-19 08:42:44.331217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.465 [2024-11-19 08:42:44.331225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.465 [2024-11-19 08:42:44.331349] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.538 ms, result 0 00:21:22.725 00:21:22.725 00:21:22.725 08:42:44 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:24.681 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:24.681 08:42:46 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:24.681 [2024-11-19 08:42:46.264442] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:21:24.681 [2024-11-19 08:42:46.264653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88050 ] 00:21:24.681 [2024-11-19 08:42:46.418312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.681 [2024-11-19 08:42:46.444987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.681 [2024-11-19 08:42:46.547431] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.681 [2024-11-19 08:42:46.547498] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.943 [2024-11-19 08:42:46.700776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.943 [2024-11-19 08:42:46.700825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:24.943 [2024-11-19 08:42:46.700840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:24.943 [2024-11-19 08:42:46.700847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.943 [2024-11-19 08:42:46.700888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.943 [2024-11-19 08:42:46.700903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:24.943 [2024-11-19 08:42:46.700911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:24.943 [2024-11-19 08:42:46.700923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.943 [2024-11-19 08:42:46.700944] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:24.943 [2024-11-19 08:42:46.701150] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:24.943 [2024-11-19 08:42:46.701180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.943 [2024-11-19 08:42:46.701188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:24.943 [2024-11-19 08:42:46.701196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:21:24.943 [2024-11-19 08:42:46.701205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.943 [2024-11-19 08:42:46.702572] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:24.943 [2024-11-19 08:42:46.704963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.943 [2024-11-19 08:42:46.704994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:24.943 [2024-11-19 08:42:46.705004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.398 ms 00:21:24.943 [2024-11-19 08:42:46.705011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.943 [2024-11-19 08:42:46.705065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.943 [2024-11-19 08:42:46.705077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:24.943 [2024-11-19 08:42:46.705085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:24.943 [2024-11-19 08:42:46.705092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.943 [2024-11-19 08:42:46.711739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:24.943 [2024-11-19 08:42:46.711767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:24.943 [2024-11-19 08:42:46.711783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.617 ms 00:21:24.943 [2024-11-19 08:42:46.711793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.943 [2024-11-19 08:42:46.711869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.943 [2024-11-19 08:42:46.711879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:24.943 [2024-11-19 08:42:46.711887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:24.943 [2024-11-19 08:42:46.711894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.943 [2024-11-19 08:42:46.711942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.943 [2024-11-19 08:42:46.711957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:24.943 [2024-11-19 08:42:46.711973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:24.943 [2024-11-19 08:42:46.711980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.943 [2024-11-19 08:42:46.712006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:24.943 [2024-11-19 08:42:46.713580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.943 [2024-11-19 08:42:46.713606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:24.943 [2024-11-19 08:42:46.713614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.583 ms 00:21:24.943 [2024-11-19 08:42:46.713622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.943 [2024-11-19 08:42:46.713648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.943 [2024-11-19 08:42:46.713657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:24.944 [2024-11-19 08:42:46.713664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:24.944 [2024-11-19 08:42:46.713671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.944 [2024-11-19 08:42:46.713693] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:24.944 [2024-11-19 08:42:46.713737] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:24.944 [2024-11-19 08:42:46.713774] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:24.944 [2024-11-19 08:42:46.713795] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:24.944 [2024-11-19 08:42:46.713875] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:24.944 [2024-11-19 08:42:46.713892] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:24.944 [2024-11-19 08:42:46.713902] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:24.944 [2024-11-19 08:42:46.713917] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:24.944 [2024-11-19 08:42:46.713926] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:24.944 [2024-11-19 08:42:46.713933] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:24.944 [2024-11-19 08:42:46.713941] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:24.944 [2024-11-19 08:42:46.713948] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:24.944 [2024-11-19 08:42:46.713962] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:24.944 [2024-11-19 08:42:46.713971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.944 [2024-11-19 08:42:46.713978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:24.944 [2024-11-19 08:42:46.713986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:21:24.944 [2024-11-19 08:42:46.713993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.944 [2024-11-19 08:42:46.714062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.944 [2024-11-19 08:42:46.714078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:24.944 [2024-11-19 08:42:46.714085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:24.944 [2024-11-19 08:42:46.714092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.944 [2024-11-19 08:42:46.714185] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:24.944 [2024-11-19 08:42:46.714203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:24.944 [2024-11-19 08:42:46.714210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:24.944 [2024-11-19 08:42:46.714218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.944 [2024-11-19 08:42:46.714225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:24.944 [2024-11-19 08:42:46.714234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:24.944 [2024-11-19 08:42:46.714240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:24.944 [2024-11-19 08:42:46.714247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:24.944 [2024-11-19 08:42:46.714254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:24.944 [2024-11-19 08:42:46.714260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.944 [2024-11-19 08:42:46.714267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:24.944 [2024-11-19 08:42:46.714276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:24.944 [2024-11-19 08:42:46.714282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.944 [2024-11-19 08:42:46.714288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:24.944 [2024-11-19 08:42:46.714295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:24.944 [2024-11-19 08:42:46.714301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.944 [2024-11-19 08:42:46.714307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:24.944 [2024-11-19 08:42:46.714314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:24.944 [2024-11-19 08:42:46.714319] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.944 [2024-11-19 08:42:46.714326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:24.944 [2024-11-19 08:42:46.714332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:24.944 [2024-11-19 08:42:46.714338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.944 [2024-11-19 08:42:46.714344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:24.944 [2024-11-19 08:42:46.714350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:24.944 [2024-11-19 08:42:46.714355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.944 [2024-11-19 08:42:46.714362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:24.944 [2024-11-19 08:42:46.714368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:24.944 [2024-11-19 08:42:46.714380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.944 [2024-11-19 08:42:46.714386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:24.944 [2024-11-19 08:42:46.714393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:24.944 [2024-11-19 08:42:46.714400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.944 [2024-11-19 08:42:46.714406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:24.944 [2024-11-19 08:42:46.714412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:24.944 [2024-11-19 08:42:46.714418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.944 [2024-11-19 08:42:46.714425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:24.944 [2024-11-19 08:42:46.714431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:24.944 [2024-11-19 08:42:46.714437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.944 [2024-11-19 08:42:46.714444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:24.944 [2024-11-19 08:42:46.714450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:24.944 [2024-11-19 08:42:46.714456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.944 [2024-11-19 08:42:46.714463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:24.944 [2024-11-19 08:42:46.714469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:24.944 [2024-11-19 08:42:46.714474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.944 [2024-11-19 08:42:46.714482] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:24.944 [2024-11-19 08:42:46.714490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:24.944 [2024-11-19 08:42:46.714499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:24.944 [2024-11-19 08:42:46.714506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.944 [2024-11-19 08:42:46.714512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:24.944 [2024-11-19 08:42:46.714518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:24.944 [2024-11-19 08:42:46.714524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:24.944 
[2024-11-19 08:42:46.714530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:24.944 [2024-11-19 08:42:46.714536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:24.944 [2024-11-19 08:42:46.714542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:24.944 [2024-11-19 08:42:46.714550] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:24.944 [2024-11-19 08:42:46.714559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.944 [2024-11-19 08:42:46.714566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:24.944 [2024-11-19 08:42:46.714573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:24.944 [2024-11-19 08:42:46.714580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:24.944 [2024-11-19 08:42:46.714587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:24.944 [2024-11-19 08:42:46.714595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:24.944 [2024-11-19 08:42:46.714602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:24.945 [2024-11-19 08:42:46.714608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:24.945 [2024-11-19 08:42:46.714615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:24.945 [2024-11-19 08:42:46.714622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:24.945 [2024-11-19 08:42:46.714628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:24.945 [2024-11-19 08:42:46.714634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:24.945 [2024-11-19 08:42:46.714649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:24.945 [2024-11-19 08:42:46.714656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:24.945 [2024-11-19 08:42:46.714662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:24.945 [2024-11-19 08:42:46.714670] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:24.945 [2024-11-19 08:42:46.714677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.945 [2024-11-19 08:42:46.714685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:24.945 [2024-11-19 08:42:46.714691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:24.945 [2024-11-19 08:42:46.714698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:24.945 [2024-11-19 08:42:46.714704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:24.945 [2024-11-19 08:42:46.714715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.714734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:24.945 [2024-11-19 08:42:46.714741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:21:24.945 [2024-11-19 08:42:46.714748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.726475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.726509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.945 [2024-11-19 08:42:46.726519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.706 ms 00:21:24.945 [2024-11-19 08:42:46.726527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.726601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.726626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:24.945 [2024-11-19 08:42:46.726634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:24.945 [2024-11-19 08:42:46.726642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.754253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.754321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.945 [2024-11-19 08:42:46.754350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.601 ms 00:21:24.945 [2024-11-19 08:42:46.754372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.754446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.754495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.945 [2024-11-19 08:42:46.754519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:24.945 [2024-11-19 08:42:46.754539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.755285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.755331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.945 [2024-11-19 08:42:46.755355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:21:24.945 [2024-11-19 08:42:46.755377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.755656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.755700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.945 [2024-11-19 08:42:46.755750] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:21:24.945 [2024-11-19 08:42:46.755773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.765872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.765921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.945 [2024-11-19 08:42:46.765940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.065 ms 00:21:24.945 [2024-11-19 08:42:46.765954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.769360] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:24.945 [2024-11-19 08:42:46.769410] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:24.945 [2024-11-19 08:42:46.769432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.769447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:24.945 [2024-11-19 08:42:46.769461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.315 ms 00:21:24.945 [2024-11-19 08:42:46.769475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.786838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.786874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:24.945 [2024-11-19 08:42:46.786894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.318 ms 00:21:24.945 [2024-11-19 08:42:46.786907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.788547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.788577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:24.945 [2024-11-19 08:42:46.788587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.605 ms 00:21:24.945 [2024-11-19 08:42:46.788594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.790132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.790160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:24.945 [2024-11-19 08:42:46.790170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:21:24.945 [2024-11-19 08:42:46.790177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.790436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.790457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:24.945 [2024-11-19 08:42:46.790466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:21:24.945 [2024-11-19 08:42:46.790474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.810479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.810544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:24.945 [2024-11-19 08:42:46.810558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.024 ms 00:21:24.945 [2024-11-19 08:42:46.810566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.816469] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:24.945 [2024-11-19 08:42:46.818743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.818772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:24.945 [2024-11-19 08:42:46.818782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.151 ms 00:21:24.945 [2024-11-19 08:42:46.818793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.818848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.818858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:24.945 [2024-11-19 08:42:46.818867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:24.945 [2024-11-19 08:42:46.818874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.818956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.818967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:24.945 [2024-11-19 08:42:46.818975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:24.945 [2024-11-19 08:42:46.818985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.945 [2024-11-19 08:42:46.819009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.945 [2024-11-19 08:42:46.819019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:24.946 [2024-11-19 08:42:46.819027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:24.946 [2024-11-19 08:42:46.819034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.946 [2024-11-19 08:42:46.819065] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:24.946 [2024-11-19 08:42:46.819076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.946 [2024-11-19 08:42:46.819085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:24.946 [2024-11-19 08:42:46.819093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:24.946 [2024-11-19 08:42:46.819102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.946 [2024-11-19 08:42:46.822988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.946 [2024-11-19 08:42:46.823020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:24.946 [2024-11-19 08:42:46.823029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.876 ms 00:21:24.946 [2024-11-19 08:42:46.823037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.946 [2024-11-19 08:42:46.823102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.946 [2024-11-19 08:42:46.823112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:24.946 [2024-11-19 08:42:46.823127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:24.946 [2024-11-19 08:42:46.823134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.946 
[2024-11-19 08:42:46.824161] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 123.212 ms, result 0 00:21:26.328  [2024-11-19T08:42:49.176Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-19T08:42:50.116Z] Copying: 47/1024 [MB] (24 MBps) [2024-11-19T08:42:51.056Z] Copying: 71/1024 [MB] (24 MBps) [2024-11-19T08:42:51.995Z] Copying: 96/1024 [MB] (24 MBps) [2024-11-19T08:42:52.934Z] Copying: 121/1024 [MB] (24 MBps) [2024-11-19T08:42:53.875Z] Copying: 145/1024 [MB] (24 MBps) [2024-11-19T08:42:55.258Z] Copying: 169/1024 [MB] (24 MBps) [2024-11-19T08:42:55.828Z] Copying: 193/1024 [MB] (23 MBps) [2024-11-19T08:42:57.210Z] Copying: 217/1024 [MB] (23 MBps) [2024-11-19T08:42:58.150Z] Copying: 241/1024 [MB] (23 MBps) [2024-11-19T08:42:59.090Z] Copying: 265/1024 [MB] (24 MBps) [2024-11-19T08:43:00.032Z] Copying: 289/1024 [MB] (23 MBps) [2024-11-19T08:43:00.972Z] Copying: 314/1024 [MB] (25 MBps) [2024-11-19T08:43:01.913Z] Copying: 339/1024 [MB] (24 MBps) [2024-11-19T08:43:02.853Z] Copying: 362/1024 [MB] (23 MBps) [2024-11-19T08:43:04.235Z] Copying: 386/1024 [MB] (24 MBps) [2024-11-19T08:43:04.806Z] Copying: 410/1024 [MB] (23 MBps) [2024-11-19T08:43:06.190Z] Copying: 434/1024 [MB] (23 MBps) [2024-11-19T08:43:07.128Z] Copying: 456/1024 [MB] (22 MBps) [2024-11-19T08:43:08.070Z] Copying: 479/1024 [MB] (23 MBps) [2024-11-19T08:43:09.061Z] Copying: 504/1024 [MB] (24 MBps) [2024-11-19T08:43:09.999Z] Copying: 528/1024 [MB] (23 MBps) [2024-11-19T08:43:10.938Z] Copying: 552/1024 [MB] (23 MBps) [2024-11-19T08:43:11.877Z] Copying: 575/1024 [MB] (23 MBps) [2024-11-19T08:43:12.817Z] Copying: 599/1024 [MB] (23 MBps) [2024-11-19T08:43:14.198Z] Copying: 623/1024 [MB] (23 MBps) [2024-11-19T08:43:15.139Z] Copying: 647/1024 [MB] (23 MBps) [2024-11-19T08:43:16.079Z] Copying: 670/1024 [MB] (23 MBps) [2024-11-19T08:43:17.019Z] Copying: 693/1024 [MB] (22 MBps) [2024-11-19T08:43:17.960Z] Copying: 716/1024 [MB] (23 MBps) [2024-11-19T08:43:18.900Z] Copying: 739/1024 [MB] (22 MBps) [2024-11-19T08:43:19.839Z] Copying: 761/1024 [MB] (21 MBps) [2024-11-19T08:43:20.777Z] Copying: 783/1024 [MB] (22 MBps) [2024-11-19T08:43:22.158Z] Copying: 806/1024 [MB] (22 MBps) [2024-11-19T08:43:23.098Z] Copying: 829/1024 [MB] (22 MBps) [2024-11-19T08:43:24.035Z] Copying: 853/1024 [MB] (24 MBps) [2024-11-19T08:43:24.973Z] Copying: 877/1024 [MB] (23 MBps) [2024-11-19T08:43:25.913Z] Copying: 900/1024 [MB] (23 MBps) [2024-11-19T08:43:26.852Z] Copying: 924/1024 [MB] (23 MBps) [2024-11-19T08:43:27.790Z] Copying: 948/1024 [MB] (23 MBps) [2024-11-19T08:43:29.171Z] Copying: 971/1024 [MB] (23 MBps) [2024-11-19T08:43:30.110Z] Copying: 995/1024 [MB] (24 MBps) [2024-11-19T08:43:31.062Z] Copying: 1019/1024 [MB] (23 MBps) [2024-11-19T08:43:31.062Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-19 08:43:30.708379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.155 [2024-11-19 08:43:30.708452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:09.155 [2024-11-19 08:43:30.708466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:09.155 [2024-11-19 08:43:30.708473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.155 [2024-11-19 08:43:30.709691] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:09.155 [2024-11-19 08:43:30.711706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.155 [2024-11-19 
08:43:30.711757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:09.155 [2024-11-19 08:43:30.711768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.976 ms 00:22:09.155 [2024-11-19 08:43:30.711798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.155 [2024-11-19 08:43:30.722820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.155 [2024-11-19 08:43:30.722876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:09.155 [2024-11-19 08:43:30.722888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.781 ms 00:22:09.155 [2024-11-19 08:43:30.722895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.156 [2024-11-19 08:43:30.745365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.156 [2024-11-19 08:43:30.745410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:09.156 [2024-11-19 08:43:30.745424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.495 ms 00:22:09.156 [2024-11-19 08:43:30.745434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.156 [2024-11-19 08:43:30.750190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.156 [2024-11-19 08:43:30.750230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:09.156 [2024-11-19 08:43:30.750239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.720 ms 00:22:09.156 [2024-11-19 08:43:30.750246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.156 [2024-11-19 08:43:30.751978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.156 [2024-11-19 08:43:30.752009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:09.156 [2024-11-19 08:43:30.752018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms 00:22:09.156 [2024-11-19 08:43:30.752024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.156 [2024-11-19 08:43:30.756376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.156 [2024-11-19 08:43:30.756412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:09.156 [2024-11-19 08:43:30.756422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.335 ms 00:22:09.156 [2024-11-19 08:43:30.756429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.156 [2024-11-19 08:43:30.877759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.156 [2024-11-19 08:43:30.877799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:09.156 [2024-11-19 08:43:30.877810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 121.531 ms 00:22:09.156 [2024-11-19 08:43:30.877819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.156 [2024-11-19 08:43:30.880008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.156 [2024-11-19 08:43:30.880040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:09.156 [2024-11-19 08:43:30.880049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:22:09.156 [2024-11-19 08:43:30.880056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.156 [2024-11-19 08:43:30.881641] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.156 [2024-11-19 08:43:30.881687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:09.156 [2024-11-19 08:43:30.881697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:22:09.156 [2024-11-19 08:43:30.881703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.156 [2024-11-19 08:43:30.882780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.156 [2024-11-19 08:43:30.882808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:09.156 [2024-11-19 08:43:30.882817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:22:09.156 [2024-11-19 08:43:30.882824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.156 [2024-11-19 08:43:30.883923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.156 [2024-11-19 08:43:30.883956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:09.156 [2024-11-19 08:43:30.883965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:22:09.156 [2024-11-19 08:43:30.883972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.156 [2024-11-19 08:43:30.883993] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:09.156 [2024-11-19 08:43:30.884007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 110336 / 261120 wr_cnt: 1 state: open 00:22:09.156 [2024-11-19 08:43:30.884020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 
00:22:09.156 [2024-11-19 08:43:30.884124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 
wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:09.156 [2024-11-19 08:43:30.884442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884673] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:09.157 [2024-11-19 08:43:30.884781] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:09.157 [2024-11-19 08:43:30.884788] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da080e19-877f-4306-b925-6630d39cb4d4 00:22:09.157 [2024-11-19 08:43:30.884805] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 110336 00:22:09.157 [2024-11-19 08:43:30.884812] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 111296 00:22:09.157 [2024-11-19 08:43:30.884818] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 110336 00:22:09.157 [2024-11-19 08:43:30.884834] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0087 00:22:09.157 [2024-11-19 08:43:30.884841] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:09.157 [2024-11-19 08:43:30.884848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:09.157 [2024-11-19 08:43:30.884855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:09.157 [2024-11-19 08:43:30.884862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:09.157 [2024-11-19 08:43:30.884868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:09.157 [2024-11-19 08:43:30.884875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.157 [2024-11-19 08:43:30.884882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:09.157 [2024-11-19 08:43:30.884890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:22:09.157 [2024-11-19 08:43:30.884897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.157 [2024-11-19 08:43:30.886629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.157 [2024-11-19 08:43:30.886655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:09.157 [2024-11-19 08:43:30.886664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.722 ms 00:22:09.157 [2024-11-19 08:43:30.886671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.157 [2024-11-19 08:43:30.886793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.157 [2024-11-19 08:43:30.886804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:09.157 [2024-11-19 08:43:30.886812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:09.157 [2024-11-19 08:43:30.886819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.157 [2024-11-19 08:43:30.892552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.157 [2024-11-19 08:43:30.892582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:09.157 [2024-11-19 08:43:30.892591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.157 [2024-11-19 08:43:30.892599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.157 [2024-11-19 08:43:30.892650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.157 [2024-11-19 08:43:30.892659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:09.157 [2024-11-19 08:43:30.892666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.157 [2024-11-19 08:43:30.892673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.157 [2024-11-19 08:43:30.892713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.157 [2024-11-19 08:43:30.892742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:09.157 [2024-11-19 08:43:30.892749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.157 [2024-11-19 08:43:30.892756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.157 [2024-11-19 08:43:30.892770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.157 [2024-11-19 08:43:30.892778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:09.157 [2024-11-19 08:43:30.892791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.157 [2024-11-19 08:43:30.892813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.157 [2024-11-19 08:43:30.905770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.157 [2024-11-19 08:43:30.905812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:09.157 [2024-11-19 08:43:30.905822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.157 [2024-11-19 08:43:30.905830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.157 [2024-11-19 08:43:30.913712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.157 [2024-11-19 08:43:30.913776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:09.157 [2024-11-19 08:43:30.913787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.157 [2024-11-19 08:43:30.913794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.157 [2024-11-19 08:43:30.913840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.157 [2024-11-19 08:43:30.913854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:09.157 [2024-11-19 
08:43:30.913862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.157 [2024-11-19 08:43:30.913869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.157 [2024-11-19 08:43:30.913889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.157 [2024-11-19 08:43:30.913906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:09.157 [2024-11-19 08:43:30.913915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.157 [2024-11-19 08:43:30.913922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.158 [2024-11-19 08:43:30.913991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.158 [2024-11-19 08:43:30.914001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:09.158 [2024-11-19 08:43:30.914012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.158 [2024-11-19 08:43:30.914019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.158 [2024-11-19 08:43:30.914048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.158 [2024-11-19 08:43:30.914057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:09.158 [2024-11-19 08:43:30.914064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.158 [2024-11-19 08:43:30.914070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.158 [2024-11-19 08:43:30.914106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.158 [2024-11-19 08:43:30.914114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:09.158 [2024-11-19 08:43:30.914124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.158 [2024-11-19 08:43:30.914133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.158 [2024-11-19 08:43:30.914171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.158 [2024-11-19 08:43:30.914180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:09.158 [2024-11-19 08:43:30.914188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.158 [2024-11-19 08:43:30.914195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.158 [2024-11-19 08:43:30.914312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 208.463 ms, result 0 00:22:10.156 00:22:10.156 00:22:10.156 08:43:31 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:10.156 [2024-11-19 08:43:31.907647] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:22:10.156 [2024-11-19 08:43:31.907783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88515 ] 00:22:10.156 [2024-11-19 08:43:32.060033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.416 [2024-11-19 08:43:32.084416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.416 [2024-11-19 08:43:32.187170] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:10.416 [2024-11-19 08:43:32.187237] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:10.678 [2024-11-19 08:43:32.341206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.678 [2024-11-19 08:43:32.341260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:10.678 [2024-11-19 08:43:32.341275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:10.678 [2024-11-19 08:43:32.341283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.678 [2024-11-19 08:43:32.341329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.678 [2024-11-19 08:43:32.341340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:10.678 [2024-11-19 08:43:32.341347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:10.678 [2024-11-19 08:43:32.341354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.678 [2024-11-19 08:43:32.341375] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:10.678 [2024-11-19 08:43:32.341569] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:10.678 [2024-11-19 08:43:32.341612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.678 [2024-11-19 08:43:32.341620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:10.678 [2024-11-19 08:43:32.341635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:22:10.678 [2024-11-19 08:43:32.341644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.678 [2024-11-19 08:43:32.343012] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:10.678 [2024-11-19 08:43:32.345452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.678 [2024-11-19 08:43:32.345487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:10.678 [2024-11-19 08:43:32.345498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.445 ms 00:22:10.678 [2024-11-19 08:43:32.345506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.678 [2024-11-19 08:43:32.345559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.678 [2024-11-19 08:43:32.345571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:10.678 [2024-11-19 08:43:32.345580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:10.678 [2024-11-19 08:43:32.345596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.678 [2024-11-19 08:43:32.352285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:10.678 [2024-11-19 08:43:32.352315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:10.678 [2024-11-19 08:43:32.352323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.650 ms 00:22:10.678 [2024-11-19 08:43:32.352339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.678 [2024-11-19 08:43:32.352420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.678 [2024-11-19 08:43:32.352431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:10.679 [2024-11-19 08:43:32.352439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:10.679 [2024-11-19 08:43:32.352448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.679 [2024-11-19 08:43:32.352492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.679 [2024-11-19 08:43:32.352503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:10.679 [2024-11-19 08:43:32.352511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:10.679 [2024-11-19 08:43:32.352518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.679 [2024-11-19 08:43:32.352552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:10.679 [2024-11-19 08:43:32.354149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.679 [2024-11-19 08:43:32.354178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:10.679 [2024-11-19 08:43:32.354196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:22:10.679 [2024-11-19 08:43:32.354204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.679 [2024-11-19 08:43:32.354232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.679 [2024-11-19 08:43:32.354241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:10.679 [2024-11-19 08:43:32.354249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:10.679 [2024-11-19 08:43:32.354256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.679 [2024-11-19 08:43:32.354280] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:10.679 [2024-11-19 08:43:32.354298] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:10.679 [2024-11-19 08:43:32.354338] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:10.679 [2024-11-19 08:43:32.354353] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:10.679 [2024-11-19 08:43:32.354435] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:10.679 [2024-11-19 08:43:32.354448] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:10.679 [2024-11-19 08:43:32.354458] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:10.679 [2024-11-19 08:43:32.354470] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:10.679 [2024-11-19 08:43:32.354485] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:10.679 [2024-11-19 08:43:32.354493] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:10.679 [2024-11-19 08:43:32.354501] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:10.679 [2024-11-19 08:43:32.354508] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:10.679 [2024-11-19 08:43:32.354515] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:10.679 [2024-11-19 08:43:32.354522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.679 [2024-11-19 08:43:32.354530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:10.679 [2024-11-19 08:43:32.354537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:22:10.679 [2024-11-19 08:43:32.354556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.679 [2024-11-19 08:43:32.354621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.679 [2024-11-19 08:43:32.354631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:10.679 [2024-11-19 08:43:32.354645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:10.679 [2024-11-19 08:43:32.354652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.679 [2024-11-19 08:43:32.354783] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:10.679 [2024-11-19 08:43:32.354797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:10.679 [2024-11-19 08:43:32.354805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:10.679 [2024-11-19 08:43:32.354812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.679 [2024-11-19 08:43:32.354819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:10.679 [2024-11-19 08:43:32.354826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:10.679 [2024-11-19 08:43:32.354834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:10.679 [2024-11-19 08:43:32.354842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:10.679 [2024-11-19 08:43:32.354850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:10.679 [2024-11-19 08:43:32.354856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:10.679 [2024-11-19 08:43:32.354867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:10.679 [2024-11-19 08:43:32.354874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:10.679 [2024-11-19 08:43:32.354881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:10.679 [2024-11-19 08:43:32.354887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:10.679 [2024-11-19 08:43:32.354894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:10.679 [2024-11-19 08:43:32.354901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.679 [2024-11-19 08:43:32.354907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:10.679 [2024-11-19 08:43:32.354914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:10.679 [2024-11-19 08:43:32.354919] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.679 [2024-11-19 08:43:32.354926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:10.679 [2024-11-19 08:43:32.354932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:10.679 [2024-11-19 08:43:32.354938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.679 [2024-11-19 08:43:32.354944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:10.679 [2024-11-19 08:43:32.354951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:10.679 [2024-11-19 08:43:32.354956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.679 [2024-11-19 08:43:32.354962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:10.679 [2024-11-19 08:43:32.354969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:10.679 [2024-11-19 08:43:32.354975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.679 [2024-11-19 08:43:32.354982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:10.679 [2024-11-19 08:43:32.354988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:10.679 [2024-11-19 08:43:32.354995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.679 [2024-11-19 08:43:32.355001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:10.679 [2024-11-19 08:43:32.355007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:10.679 [2024-11-19 08:43:32.355013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:10.679 [2024-11-19 08:43:32.355019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:10.679 [2024-11-19 08:43:32.355024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:10.679 [2024-11-19 08:43:32.355030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:10.679 [2024-11-19 08:43:32.355036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:10.679 [2024-11-19 08:43:32.355042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:10.679 [2024-11-19 08:43:32.355048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.679 [2024-11-19 08:43:32.355055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:10.679 [2024-11-19 08:43:32.355061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:10.679 [2024-11-19 08:43:32.355068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.679 [2024-11-19 08:43:32.355075] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:10.679 [2024-11-19 08:43:32.355083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:10.679 [2024-11-19 08:43:32.355092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:10.679 [2024-11-19 08:43:32.355098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.679 [2024-11-19 08:43:32.355105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:10.679 [2024-11-19 08:43:32.355111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:10.679 [2024-11-19 08:43:32.355117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:10.679 
[2024-11-19 08:43:32.355124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:10.679 [2024-11-19 08:43:32.355130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:10.679 [2024-11-19 08:43:32.355136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:10.679 [2024-11-19 08:43:32.355144] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:10.679 [2024-11-19 08:43:32.355152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:10.679 [2024-11-19 08:43:32.355159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:10.679 [2024-11-19 08:43:32.355165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:10.679 [2024-11-19 08:43:32.355173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:10.679 [2024-11-19 08:43:32.355184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:10.679 [2024-11-19 08:43:32.355191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:10.679 [2024-11-19 08:43:32.355198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:10.679 [2024-11-19 08:43:32.355206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:10.679 [2024-11-19 08:43:32.355212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:10.679 [2024-11-19 08:43:32.355219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:10.679 [2024-11-19 08:43:32.355226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:10.680 [2024-11-19 08:43:32.355232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:10.680 [2024-11-19 08:43:32.355247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:10.680 [2024-11-19 08:43:32.355254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:10.680 [2024-11-19 08:43:32.355261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:10.680 [2024-11-19 08:43:32.355267] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:10.680 [2024-11-19 08:43:32.355281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:10.680 [2024-11-19 08:43:32.355289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:10.680 [2024-11-19 08:43:32.355297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:10.680 [2024-11-19 08:43:32.355304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:10.680 [2024-11-19 08:43:32.355314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:10.680 [2024-11-19 08:43:32.355321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.355328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:10.680 [2024-11-19 08:43:32.355335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:22:10.680 [2024-11-19 08:43:32.355348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.367164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.367198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:10.680 [2024-11-19 08:43:32.367209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.791 ms 00:22:10.680 [2024-11-19 08:43:32.367227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.367296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.367305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:10.680 [2024-11-19 08:43:32.367312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:10.680 [2024-11-19 08:43:32.367319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.388831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.388916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:10.680 [2024-11-19 08:43:32.388948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.497 ms 00:22:10.680 [2024-11-19 08:43:32.388971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.389057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.389084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:10.680 [2024-11-19 08:43:32.389107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:10.680 [2024-11-19 08:43:32.389154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.389894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.389939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:10.680 [2024-11-19 08:43:32.389964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:22:10.680 [2024-11-19 08:43:32.389985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.390283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.390327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:10.680 [2024-11-19 08:43:32.390350] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:22:10.680 [2024-11-19 08:43:32.390371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.400320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.400371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:10.680 [2024-11-19 08:43:32.400391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.896 ms 00:22:10.680 [2024-11-19 08:43:32.400405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.404024] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:10.680 [2024-11-19 08:43:32.404077] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:10.680 [2024-11-19 08:43:32.404098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.404113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:10.680 [2024-11-19 08:43:32.404129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.525 ms 00:22:10.680 [2024-11-19 08:43:32.404142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.420592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.420631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:10.680 [2024-11-19 08:43:32.420642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.415 ms 00:22:10.680 [2024-11-19 08:43:32.420650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.422310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.422342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:10.680 [2024-11-19 08:43:32.422351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:22:10.680 [2024-11-19 08:43:32.422358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.423796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.423825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:10.680 [2024-11-19 08:43:32.423834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:22:10.680 [2024-11-19 08:43:32.423841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.424119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.424139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:10.680 [2024-11-19 08:43:32.424148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:22:10.680 [2024-11-19 08:43:32.424162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.444237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.444300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:10.680 [2024-11-19 08:43:32.444324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.089 ms 00:22:10.680 [2024-11-19 08:43:32.444332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.449994] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:10.680 [2024-11-19 08:43:32.452481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.452508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:10.680 [2024-11-19 08:43:32.452518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.117 ms 00:22:10.680 [2024-11-19 08:43:32.452535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.452589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.452599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:10.680 [2024-11-19 08:43:32.452607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:10.680 [2024-11-19 08:43:32.452614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.454147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.454199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:10.680 [2024-11-19 08:43:32.454212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.496 ms 00:22:10.680 [2024-11-19 08:43:32.454219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.454250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.454261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:10.680 [2024-11-19 08:43:32.454270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:10.680 [2024-11-19 08:43:32.454277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.454310] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:10.680 [2024-11-19 08:43:32.454320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.454327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:10.680 [2024-11-19 08:43:32.454336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:10.680 [2024-11-19 08:43:32.454346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.458010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.458046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:10.680 [2024-11-19 08:43:32.458056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.656 ms 00:22:10.680 [2024-11-19 08:43:32.458064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-19 08:43:32.458129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-19 08:43:32.458140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:10.680 [2024-11-19 08:43:32.458148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:10.680 [2024-11-19 08:43:32.458159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 
[2024-11-19 08:43:32.459179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.770 ms, result 0 00:22:12.063  [2024-11-19T08:43:34.910Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-19T08:43:35.850Z] Copying: 49/1024 [MB] (26 MBps) [2024-11-19T08:43:36.790Z] Copying: 75/1024 [MB] (26 MBps) [2024-11-19T08:43:37.731Z] Copying: 102/1024 [MB] (26 MBps) [2024-11-19T08:43:38.672Z] Copying: 129/1024 [MB] (27 MBps) [2024-11-19T08:43:39.612Z] Copying: 157/1024 [MB] (27 MBps) [2024-11-19T08:43:40.995Z] Copying: 184/1024 [MB] (27 MBps) [2024-11-19T08:43:41.936Z] Copying: 212/1024 [MB] (27 MBps) [2024-11-19T08:43:42.877Z] Copying: 240/1024 [MB] (27 MBps) [2024-11-19T08:43:43.818Z] Copying: 267/1024 [MB] (27 MBps) [2024-11-19T08:43:44.758Z] Copying: 293/1024 [MB] (26 MBps) [2024-11-19T08:43:45.697Z] Copying: 321/1024 [MB] (27 MBps) [2024-11-19T08:43:46.637Z] Copying: 348/1024 [MB] (27 MBps) [2024-11-19T08:43:48.020Z] Copying: 375/1024 [MB] (27 MBps) [2024-11-19T08:43:48.589Z] Copying: 403/1024 [MB] (27 MBps) [2024-11-19T08:43:49.971Z] Copying: 431/1024 [MB] (27 MBps) [2024-11-19T08:43:50.911Z] Copying: 459/1024 [MB] (28 MBps) [2024-11-19T08:43:51.851Z] Copying: 487/1024 [MB] (27 MBps) [2024-11-19T08:43:52.792Z] Copying: 514/1024 [MB] (27 MBps) [2024-11-19T08:43:53.816Z] Copying: 542/1024 [MB] (27 MBps) [2024-11-19T08:43:54.755Z] Copying: 570/1024 [MB] (28 MBps) [2024-11-19T08:43:55.695Z] Copying: 599/1024 [MB] (28 MBps) [2024-11-19T08:43:56.635Z] Copying: 627/1024 [MB] (28 MBps) [2024-11-19T08:43:57.572Z] Copying: 655/1024 [MB] (28 MBps) [2024-11-19T08:43:58.952Z] Copying: 684/1024 [MB] (28 MBps) [2024-11-19T08:43:59.891Z] Copying: 713/1024 [MB] (28 MBps) [2024-11-19T08:44:00.830Z] Copying: 742/1024 [MB] (29 MBps) [2024-11-19T08:44:01.770Z] Copying: 771/1024 [MB] (29 MBps) [2024-11-19T08:44:02.708Z] Copying: 799/1024 [MB] (27 MBps) [2024-11-19T08:44:03.646Z] Copying: 827/1024 [MB] (27 MBps) [2024-11-19T08:44:04.585Z] Copying: 854/1024 [MB] (27 MBps) [2024-11-19T08:44:05.964Z] Copying: 882/1024 [MB] (27 MBps) [2024-11-19T08:44:06.902Z] Copying: 910/1024 [MB] (27 MBps) [2024-11-19T08:44:07.840Z] Copying: 937/1024 [MB] (27 MBps) [2024-11-19T08:44:08.780Z] Copying: 964/1024 [MB] (27 MBps) [2024-11-19T08:44:09.719Z] Copying: 991/1024 [MB] (27 MBps) [2024-11-19T08:44:09.719Z] Copying: 1019/1024 [MB] (27 MBps) [2024-11-19T08:44:09.979Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-19 08:44:09.889831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.072 [2024-11-19 08:44:09.890057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:48.072 [2024-11-19 08:44:09.890093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:48.072 [2024-11-19 08:44:09.890111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.072 [2024-11-19 08:44:09.890164] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:48.072 [2024-11-19 08:44:09.891316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.072 [2024-11-19 08:44:09.891385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:48.072 [2024-11-19 08:44:09.891420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:22:48.072 [2024-11-19 08:44:09.891444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.072 [2024-11-19 08:44:09.891875] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.072 [2024-11-19 08:44:09.891911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:48.072 [2024-11-19 08:44:09.891930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:22:48.072 [2024-11-19 08:44:09.891946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.072 [2024-11-19 08:44:09.901489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.072 [2024-11-19 08:44:09.901556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:48.072 [2024-11-19 08:44:09.901573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.527 ms 00:22:48.072 [2024-11-19 08:44:09.901586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.072 [2024-11-19 08:44:09.909387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.072 [2024-11-19 08:44:09.909435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:48.072 [2024-11-19 08:44:09.909446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.749 ms 00:22:48.072 [2024-11-19 08:44:09.909455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.072 [2024-11-19 08:44:09.911278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.072 [2024-11-19 08:44:09.911326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:48.072 [2024-11-19 08:44:09.911337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.766 ms 00:22:48.072 [2024-11-19 08:44:09.911346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.072 [2024-11-19 08:44:09.916844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.072 [2024-11-19 08:44:09.916895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:48.072 [2024-11-19 08:44:09.916907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.473 ms 00:22:48.072 [2024-11-19 08:44:09.916935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.334 [2024-11-19 08:44:10.059882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.334 [2024-11-19 08:44:10.059920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:48.334 [2024-11-19 08:44:10.059955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 143.167 ms 00:22:48.334 [2024-11-19 08:44:10.059963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.334 [2024-11-19 08:44:10.062347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.334 [2024-11-19 08:44:10.062379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:48.334 [2024-11-19 08:44:10.062389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.371 ms 00:22:48.334 [2024-11-19 08:44:10.062396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.334 [2024-11-19 08:44:10.063878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.334 [2024-11-19 08:44:10.063910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:48.334 [2024-11-19 08:44:10.063919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms 00:22:48.334 [2024-11-19 08:44:10.063926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:48.334 [2024-11-19 08:44:10.065133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.334 [2024-11-19 08:44:10.065166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:48.334 [2024-11-19 08:44:10.065181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:22:48.334 [2024-11-19 08:44:10.065187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.334 [2024-11-19 08:44:10.066414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.334 [2024-11-19 08:44:10.066447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:48.334 [2024-11-19 08:44:10.066456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.181 ms 00:22:48.334 [2024-11-19 08:44:10.066463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.334 [2024-11-19 08:44:10.066490] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:48.334 [2024-11-19 08:44:10.066505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:48.334 [2024-11-19 08:44:10.066515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066840] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.066997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.067004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.067012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.067020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.067027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.067034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.067041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 
08:44:10.067049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:48.334 [2024-11-19 08:44:10.067057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:22:48.335 [2024-11-19 08:44:10.067239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:48.335 [2024-11-19 08:44:10.067310] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:48.335 [2024-11-19 08:44:10.067317] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da080e19-877f-4306-b925-6630d39cb4d4 00:22:48.335 [2024-11-19 08:44:10.067324] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:48.335 [2024-11-19 08:44:10.067332] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 21696 00:22:48.335 [2024-11-19 08:44:10.067345] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 20736 00:22:48.335 [2024-11-19 08:44:10.067360] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0463 00:22:48.335 [2024-11-19 08:44:10.067367] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:48.335 [2024-11-19 08:44:10.067373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:48.335 [2024-11-19 08:44:10.067380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:48.335 [2024-11-19 08:44:10.067386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:48.335 [2024-11-19 08:44:10.067392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:48.335 [2024-11-19 08:44:10.067399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.335 [2024-11-19 08:44:10.067406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:48.335 [2024-11-19 08:44:10.067420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:22:48.335 [2024-11-19 08:44:10.067434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.069221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.335 [2024-11-19 08:44:10.069243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:48.335 [2024-11-19 08:44:10.069251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.774 ms 00:22:48.335 [2024-11-19 08:44:10.069261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.069363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.335 [2024-11-19 08:44:10.069371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 
00:22:48.335 [2024-11-19 08:44:10.069386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:22:48.335 [2024-11-19 08:44:10.069393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.075135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.335 [2024-11-19 08:44:10.075161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:48.335 [2024-11-19 08:44:10.075170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.335 [2024-11-19 08:44:10.075178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.075220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.335 [2024-11-19 08:44:10.075229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:48.335 [2024-11-19 08:44:10.075236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.335 [2024-11-19 08:44:10.075242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.075304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.335 [2024-11-19 08:44:10.075316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:48.335 [2024-11-19 08:44:10.075323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.335 [2024-11-19 08:44:10.075330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.075345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.335 [2024-11-19 08:44:10.075352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:48.335 [2024-11-19 08:44:10.075359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.335 [2024-11-19 08:44:10.075366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.089053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.335 [2024-11-19 08:44:10.089103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:48.335 [2024-11-19 08:44:10.089113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.335 [2024-11-19 08:44:10.089120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.097014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.335 [2024-11-19 08:44:10.097053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:48.335 [2024-11-19 08:44:10.097064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.335 [2024-11-19 08:44:10.097072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.097137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.335 [2024-11-19 08:44:10.097152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:48.335 [2024-11-19 08:44:10.097160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.335 [2024-11-19 08:44:10.097169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.097191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.335 [2024-11-19 08:44:10.097200] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:48.335 [2024-11-19 08:44:10.097208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.335 [2024-11-19 08:44:10.097224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.097289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.335 [2024-11-19 08:44:10.097301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:48.335 [2024-11-19 08:44:10.097309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.335 [2024-11-19 08:44:10.097316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.097345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.335 [2024-11-19 08:44:10.097355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:48.335 [2024-11-19 08:44:10.097363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.335 [2024-11-19 08:44:10.097371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.097407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.335 [2024-11-19 08:44:10.097414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:48.335 [2024-11-19 08:44:10.097426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.335 [2024-11-19 08:44:10.097434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.097472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.335 [2024-11-19 08:44:10.097481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:48.335 [2024-11-19 08:44:10.097488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.335 [2024-11-19 08:44:10.097494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.335 [2024-11-19 08:44:10.097618] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 208.184 ms, result 0 00:22:48.595 00:22:48.595 00:22:48.595 08:44:10 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:50.504 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:50.504 08:44:11 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:50.504 08:44:11 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:50.504 08:44:11 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:50.504 08:44:12 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:50.504 08:44:12 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:50.504 08:44:12 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86949 00:22:50.504 08:44:12 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 86949 ']' 00:22:50.504 Process with pid 86949 is not found 00:22:50.504 08:44:12 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 86949 00:22:50.504 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86949) - No such process 00:22:50.504 08:44:12 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with 
pid 86949 is not found' 00:22:50.504 Remove shared memory files 00:22:50.504 08:44:12 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:50.504 08:44:12 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:50.504 08:44:12 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:50.504 08:44:12 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:50.504 08:44:12 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:50.504 08:44:12 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:50.504 08:44:12 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:50.504 00:22:50.504 real 3m7.711s 00:22:50.504 user 2m56.708s 00:22:50.504 sys 0m11.646s 00:22:50.504 08:44:12 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.504 08:44:12 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:50.504 ************************************ 00:22:50.504 END TEST ftl_restore 00:22:50.504 ************************************ 00:22:50.504 08:44:12 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:50.504 08:44:12 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:50.504 08:44:12 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.504 08:44:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:50.504 ************************************ 00:22:50.504 START TEST ftl_dirty_shutdown 00:22:50.504 ************************************ 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:50.504 * Looking for test storage... 
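[editor's note] The dirty_shutdown run that begins above is driven entirely through shell and the SPDK RPC interface; the trace that follows records each call the script makes. A minimal sketch of the same invocation, assuming the spdk_repo checkout path shown in the log, would be:

    # run the dirty-shutdown test against the NV cache and base NVMe devices by PCI address
    cd /home/vagrant/spdk_repo/spdk
    ./test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0   # -c <cache BDF> <base BDF>

Here -c selects the write-buffer (NV cache) device and the positional argument the base device, matching the getopts parsing visible further down in the trace.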
00:22:50.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.504 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.505 --rc genhtml_branch_coverage=1 00:22:50.505 --rc genhtml_function_coverage=1 00:22:50.505 --rc genhtml_legend=1 00:22:50.505 --rc geninfo_all_blocks=1 00:22:50.505 --rc geninfo_unexecuted_blocks=1 00:22:50.505 00:22:50.505 ' 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.505 --rc genhtml_branch_coverage=1 00:22:50.505 --rc genhtml_function_coverage=1 00:22:50.505 --rc genhtml_legend=1 00:22:50.505 --rc geninfo_all_blocks=1 00:22:50.505 --rc geninfo_unexecuted_blocks=1 00:22:50.505 00:22:50.505 ' 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.505 --rc genhtml_branch_coverage=1 00:22:50.505 --rc genhtml_function_coverage=1 00:22:50.505 --rc genhtml_legend=1 00:22:50.505 --rc geninfo_all_blocks=1 00:22:50.505 --rc geninfo_unexecuted_blocks=1 00:22:50.505 00:22:50.505 ' 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.505 --rc genhtml_branch_coverage=1 00:22:50.505 --rc genhtml_function_coverage=1 00:22:50.505 --rc genhtml_legend=1 00:22:50.505 --rc geninfo_all_blocks=1 00:22:50.505 --rc geninfo_unexecuted_blocks=1 00:22:50.505 00:22:50.505 ' 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:50.505 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:50.765 08:44:12 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=88998 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 88998 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 88998 ']' 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.765 08:44:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:50.765 [2024-11-19 08:44:12.511801] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
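[editor's note] At this point the script has launched a dedicated SPDK target on core 0 and is waiting for its RPC socket before issuing any bdev commands. A condensed sketch of that pattern, with the binary path and socket taken from the trace above and the polling loop simplified relative to the real waitforlisten helper:

    # start the SPDK app on core 0 and poll until its RPC socket answers
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

The real helper additionally enforces a retry limit and checks that the target pid is still alive; this sketch only shows the core idea.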
00:22:50.765 [2024-11-19 08:44:12.511959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88998 ] 00:22:50.765 [2024-11-19 08:44:12.665811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.025 [2024-11-19 08:44:12.690888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.594 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.594 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:22:51.594 08:44:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:51.594 08:44:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:51.594 08:44:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:51.594 08:44:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:51.595 08:44:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:51.595 08:44:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:51.854 08:44:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:51.854 08:44:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:51.854 08:44:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:51.854 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:51.854 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:51.854 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:51.854 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:51.854 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:52.114 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:52.114 { 00:22:52.114 "name": "nvme0n1", 00:22:52.114 "aliases": [ 00:22:52.114 "c41afe12-3911-4b69-af8d-bf1b7ae2f32b" 00:22:52.114 ], 00:22:52.114 "product_name": "NVMe disk", 00:22:52.114 "block_size": 4096, 00:22:52.114 "num_blocks": 1310720, 00:22:52.114 "uuid": "c41afe12-3911-4b69-af8d-bf1b7ae2f32b", 00:22:52.114 "numa_id": -1, 00:22:52.114 "assigned_rate_limits": { 00:22:52.114 "rw_ios_per_sec": 0, 00:22:52.114 "rw_mbytes_per_sec": 0, 00:22:52.114 "r_mbytes_per_sec": 0, 00:22:52.114 "w_mbytes_per_sec": 0 00:22:52.114 }, 00:22:52.114 "claimed": true, 00:22:52.114 "claim_type": "read_many_write_one", 00:22:52.114 "zoned": false, 00:22:52.114 "supported_io_types": { 00:22:52.114 "read": true, 00:22:52.114 "write": true, 00:22:52.114 "unmap": true, 00:22:52.114 "flush": true, 00:22:52.114 "reset": true, 00:22:52.114 "nvme_admin": true, 00:22:52.114 "nvme_io": true, 00:22:52.114 "nvme_io_md": false, 00:22:52.114 "write_zeroes": true, 00:22:52.114 "zcopy": false, 00:22:52.114 "get_zone_info": false, 00:22:52.114 "zone_management": false, 00:22:52.114 "zone_append": false, 00:22:52.114 "compare": true, 00:22:52.114 "compare_and_write": false, 00:22:52.114 "abort": true, 00:22:52.114 "seek_hole": false, 00:22:52.114 "seek_data": false, 00:22:52.114 
"copy": true, 00:22:52.114 "nvme_iov_md": false 00:22:52.114 }, 00:22:52.114 "driver_specific": { 00:22:52.114 "nvme": [ 00:22:52.114 { 00:22:52.114 "pci_address": "0000:00:11.0", 00:22:52.114 "trid": { 00:22:52.114 "trtype": "PCIe", 00:22:52.114 "traddr": "0000:00:11.0" 00:22:52.114 }, 00:22:52.114 "ctrlr_data": { 00:22:52.114 "cntlid": 0, 00:22:52.114 "vendor_id": "0x1b36", 00:22:52.115 "model_number": "QEMU NVMe Ctrl", 00:22:52.115 "serial_number": "12341", 00:22:52.115 "firmware_revision": "8.0.0", 00:22:52.115 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:52.115 "oacs": { 00:22:52.115 "security": 0, 00:22:52.115 "format": 1, 00:22:52.115 "firmware": 0, 00:22:52.115 "ns_manage": 1 00:22:52.115 }, 00:22:52.115 "multi_ctrlr": false, 00:22:52.115 "ana_reporting": false 00:22:52.115 }, 00:22:52.115 "vs": { 00:22:52.115 "nvme_version": "1.4" 00:22:52.115 }, 00:22:52.115 "ns_data": { 00:22:52.115 "id": 1, 00:22:52.115 "can_share": false 00:22:52.115 } 00:22:52.115 } 00:22:52.115 ], 00:22:52.115 "mp_policy": "active_passive" 00:22:52.115 } 00:22:52.115 } 00:22:52.115 ]' 00:22:52.115 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:52.115 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:52.115 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:52.115 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:52.115 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:52.115 08:44:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:22:52.115 08:44:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:52.115 08:44:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:52.115 08:44:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:52.115 08:44:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:52.115 08:44:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:52.376 08:44:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=780b27f9-6687-4b93-bae5-2bf19eb228ca 00:22:52.376 08:44:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:52.376 08:44:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 780b27f9-6687-4b93-bae5-2bf19eb228ca 00:22:52.376 08:44:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:52.635 08:44:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=da7d8577-48fe-4bbd-8e83-946d3ecb133b 00:22:52.635 08:44:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u da7d8577-48fe-4bbd-8e83-946d3ecb133b 00:22:52.896 08:44:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=072f1b28-cb37-4fa5-ba4c-bd07982e6573 00:22:52.896 08:44:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:52.896 08:44:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 072f1b28-cb37-4fa5-ba4c-bd07982e6573 00:22:52.896 08:44:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:52.896 08:44:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:52.896 08:44:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=072f1b28-cb37-4fa5-ba4c-bd07982e6573 00:22:52.896 08:44:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:52.896 08:44:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 072f1b28-cb37-4fa5-ba4c-bd07982e6573 00:22:52.896 08:44:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=072f1b28-cb37-4fa5-ba4c-bd07982e6573 00:22:52.896 08:44:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:52.896 08:44:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:52.896 08:44:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:52.896 08:44:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 072f1b28-cb37-4fa5-ba4c-bd07982e6573 00:22:53.155 08:44:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:53.155 { 00:22:53.155 "name": "072f1b28-cb37-4fa5-ba4c-bd07982e6573", 00:22:53.155 "aliases": [ 00:22:53.155 "lvs/nvme0n1p0" 00:22:53.155 ], 00:22:53.155 "product_name": "Logical Volume", 00:22:53.155 "block_size": 4096, 00:22:53.155 "num_blocks": 26476544, 00:22:53.155 "uuid": "072f1b28-cb37-4fa5-ba4c-bd07982e6573", 00:22:53.155 "assigned_rate_limits": { 00:22:53.155 "rw_ios_per_sec": 0, 00:22:53.155 "rw_mbytes_per_sec": 0, 00:22:53.155 "r_mbytes_per_sec": 0, 00:22:53.155 "w_mbytes_per_sec": 0 00:22:53.155 }, 00:22:53.155 "claimed": false, 00:22:53.155 "zoned": false, 00:22:53.155 "supported_io_types": { 00:22:53.155 "read": true, 00:22:53.155 "write": true, 00:22:53.155 "unmap": true, 00:22:53.155 "flush": false, 00:22:53.155 "reset": true, 00:22:53.155 "nvme_admin": false, 00:22:53.155 "nvme_io": false, 00:22:53.155 "nvme_io_md": false, 00:22:53.155 "write_zeroes": true, 00:22:53.155 "zcopy": false, 00:22:53.155 "get_zone_info": false, 00:22:53.155 "zone_management": false, 00:22:53.155 "zone_append": false, 00:22:53.155 "compare": false, 00:22:53.155 "compare_and_write": false, 00:22:53.155 "abort": false, 00:22:53.155 "seek_hole": true, 00:22:53.155 "seek_data": true, 00:22:53.155 "copy": false, 00:22:53.156 "nvme_iov_md": false 00:22:53.156 }, 00:22:53.156 "driver_specific": { 00:22:53.156 "lvol": { 00:22:53.156 "lvol_store_uuid": "da7d8577-48fe-4bbd-8e83-946d3ecb133b", 00:22:53.156 "base_bdev": "nvme0n1", 00:22:53.156 "thin_provision": true, 00:22:53.156 "num_allocated_clusters": 0, 00:22:53.156 "snapshot": false, 00:22:53.156 "clone": false, 00:22:53.156 "esnap_clone": false 00:22:53.156 } 00:22:53.156 } 00:22:53.156 } 00:22:53.156 ]' 00:22:53.156 08:44:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:53.156 08:44:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:53.156 08:44:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:53.156 08:44:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:53.156 08:44:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:53.156 08:44:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:53.156 08:44:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:53.156 08:44:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:53.156 08:44:14 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:53.416 08:44:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:53.416 08:44:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:53.416 08:44:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 072f1b28-cb37-4fa5-ba4c-bd07982e6573 00:22:53.416 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=072f1b28-cb37-4fa5-ba4c-bd07982e6573 00:22:53.416 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:53.416 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:53.416 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:53.416 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 072f1b28-cb37-4fa5-ba4c-bd07982e6573 00:22:53.687 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:53.687 { 00:22:53.687 "name": "072f1b28-cb37-4fa5-ba4c-bd07982e6573", 00:22:53.687 "aliases": [ 00:22:53.687 "lvs/nvme0n1p0" 00:22:53.687 ], 00:22:53.687 "product_name": "Logical Volume", 00:22:53.687 "block_size": 4096, 00:22:53.687 "num_blocks": 26476544, 00:22:53.687 "uuid": "072f1b28-cb37-4fa5-ba4c-bd07982e6573", 00:22:53.687 "assigned_rate_limits": { 00:22:53.687 "rw_ios_per_sec": 0, 00:22:53.687 "rw_mbytes_per_sec": 0, 00:22:53.687 "r_mbytes_per_sec": 0, 00:22:53.687 "w_mbytes_per_sec": 0 00:22:53.687 }, 00:22:53.687 "claimed": false, 00:22:53.687 "zoned": false, 00:22:53.687 "supported_io_types": { 00:22:53.687 "read": true, 00:22:53.687 "write": true, 00:22:53.687 "unmap": true, 00:22:53.687 "flush": false, 00:22:53.687 "reset": true, 00:22:53.687 "nvme_admin": false, 00:22:53.687 "nvme_io": false, 00:22:53.687 "nvme_io_md": false, 00:22:53.687 "write_zeroes": true, 00:22:53.687 "zcopy": false, 00:22:53.687 "get_zone_info": false, 00:22:53.687 "zone_management": false, 00:22:53.687 "zone_append": false, 00:22:53.687 "compare": false, 00:22:53.687 "compare_and_write": false, 00:22:53.687 "abort": false, 00:22:53.687 "seek_hole": true, 00:22:53.687 "seek_data": true, 00:22:53.687 "copy": false, 00:22:53.688 "nvme_iov_md": false 00:22:53.688 }, 00:22:53.688 "driver_specific": { 00:22:53.688 "lvol": { 00:22:53.688 "lvol_store_uuid": "da7d8577-48fe-4bbd-8e83-946d3ecb133b", 00:22:53.688 "base_bdev": "nvme0n1", 00:22:53.688 "thin_provision": true, 00:22:53.688 "num_allocated_clusters": 0, 00:22:53.688 "snapshot": false, 00:22:53.688 "clone": false, 00:22:53.688 "esnap_clone": false 00:22:53.688 } 00:22:53.688 } 00:22:53.688 } 00:22:53.688 ]' 00:22:53.688 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:53.688 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:53.688 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:53.688 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:53.688 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:53.688 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:53.688 08:44:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:53.688 08:44:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:53.989 08:44:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:53.989 08:44:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 072f1b28-cb37-4fa5-ba4c-bd07982e6573 00:22:53.989 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=072f1b28-cb37-4fa5-ba4c-bd07982e6573 00:22:53.989 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:53.989 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:53.989 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:53.989 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 072f1b28-cb37-4fa5-ba4c-bd07982e6573 00:22:53.989 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:53.989 { 00:22:53.989 "name": "072f1b28-cb37-4fa5-ba4c-bd07982e6573", 00:22:53.989 "aliases": [ 00:22:53.989 "lvs/nvme0n1p0" 00:22:53.989 ], 00:22:53.989 "product_name": "Logical Volume", 00:22:53.989 "block_size": 4096, 00:22:53.989 "num_blocks": 26476544, 00:22:53.989 "uuid": "072f1b28-cb37-4fa5-ba4c-bd07982e6573", 00:22:53.989 "assigned_rate_limits": { 00:22:53.989 "rw_ios_per_sec": 0, 00:22:53.989 "rw_mbytes_per_sec": 0, 00:22:53.989 "r_mbytes_per_sec": 0, 00:22:53.989 "w_mbytes_per_sec": 0 00:22:53.989 }, 00:22:53.989 "claimed": false, 00:22:53.989 "zoned": false, 00:22:53.989 "supported_io_types": { 00:22:53.989 "read": true, 00:22:53.989 "write": true, 00:22:53.989 "unmap": true, 00:22:53.989 "flush": false, 00:22:53.989 "reset": true, 00:22:53.989 "nvme_admin": false, 00:22:53.989 "nvme_io": false, 00:22:53.989 "nvme_io_md": false, 00:22:53.989 "write_zeroes": true, 00:22:53.989 "zcopy": false, 00:22:53.989 "get_zone_info": false, 00:22:53.989 "zone_management": false, 00:22:53.989 "zone_append": false, 00:22:53.989 "compare": false, 00:22:53.989 "compare_and_write": false, 00:22:53.989 "abort": false, 00:22:53.990 "seek_hole": true, 00:22:53.990 "seek_data": true, 00:22:53.990 "copy": false, 00:22:53.990 "nvme_iov_md": false 00:22:53.990 }, 00:22:53.990 "driver_specific": { 00:22:53.990 "lvol": { 00:22:53.990 "lvol_store_uuid": "da7d8577-48fe-4bbd-8e83-946d3ecb133b", 00:22:53.990 "base_bdev": "nvme0n1", 00:22:53.990 "thin_provision": true, 00:22:53.990 "num_allocated_clusters": 0, 00:22:53.990 "snapshot": false, 00:22:53.990 "clone": false, 00:22:53.990 "esnap_clone": false 00:22:53.990 } 00:22:53.990 } 00:22:53.990 } 00:22:53.990 ]' 00:22:53.990 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:54.251 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:54.251 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:54.251 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:54.251 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:54.251 08:44:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:54.251 08:44:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:54.251 08:44:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 072f1b28-cb37-4fa5-ba4c-bd07982e6573 
--l2p_dram_limit 10' 00:22:54.251 08:44:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:54.251 08:44:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:54.251 08:44:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:54.251 08:44:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 072f1b28-cb37-4fa5-ba4c-bd07982e6573 --l2p_dram_limit 10 -c nvc0n1p0 00:22:54.251 [2024-11-19 08:44:16.137801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.251 [2024-11-19 08:44:16.137853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:54.251 [2024-11-19 08:44:16.137866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:54.251 [2024-11-19 08:44:16.137875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.251 [2024-11-19 08:44:16.137942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.251 [2024-11-19 08:44:16.137955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:54.251 [2024-11-19 08:44:16.137964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:54.251 [2024-11-19 08:44:16.137983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.251 [2024-11-19 08:44:16.138003] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:54.251 [2024-11-19 08:44:16.138295] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:54.251 [2024-11-19 08:44:16.138321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.251 [2024-11-19 08:44:16.138330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:54.251 [2024-11-19 08:44:16.138338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:22:54.251 [2024-11-19 08:44:16.138347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.251 [2024-11-19 08:44:16.138377] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 22820913-26bd-4288-afb2-5561b9b6e2aa 00:22:54.251 [2024-11-19 08:44:16.139783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.251 [2024-11-19 08:44:16.139809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:54.251 [2024-11-19 08:44:16.139831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:54.251 [2024-11-19 08:44:16.139839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.251 [2024-11-19 08:44:16.147394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.251 [2024-11-19 08:44:16.147428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:54.251 [2024-11-19 08:44:16.147442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.494 ms 00:22:54.251 [2024-11-19 08:44:16.147449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.251 [2024-11-19 08:44:16.147528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.251 [2024-11-19 08:44:16.147542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:54.251 [2024-11-19 08:44:16.147552] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:54.251 [2024-11-19 08:44:16.147559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.251 [2024-11-19 08:44:16.147630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.251 [2024-11-19 08:44:16.147643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:54.251 [2024-11-19 08:44:16.147652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:54.251 [2024-11-19 08:44:16.147659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.251 [2024-11-19 08:44:16.147693] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:54.251 [2024-11-19 08:44:16.149451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.251 [2024-11-19 08:44:16.149483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:54.251 [2024-11-19 08:44:16.149492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.772 ms 00:22:54.251 [2024-11-19 08:44:16.149500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.251 [2024-11-19 08:44:16.149532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.251 [2024-11-19 08:44:16.149543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:54.251 [2024-11-19 08:44:16.149561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:54.251 [2024-11-19 08:44:16.149573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.251 [2024-11-19 08:44:16.149592] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:54.251 [2024-11-19 08:44:16.149748] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:54.251 [2024-11-19 08:44:16.149771] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:54.251 [2024-11-19 08:44:16.149785] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:54.251 [2024-11-19 08:44:16.149795] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:54.251 [2024-11-19 08:44:16.149810] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:54.251 [2024-11-19 08:44:16.149819] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:54.251 [2024-11-19 08:44:16.149831] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:54.251 [2024-11-19 08:44:16.149838] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:54.251 [2024-11-19 08:44:16.149847] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:54.251 [2024-11-19 08:44:16.149855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.251 [2024-11-19 08:44:16.149866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:54.251 [2024-11-19 08:44:16.149873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:22:54.251 [2024-11-19 08:44:16.149890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.251 [2024-11-19 08:44:16.149977] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.251 [2024-11-19 08:44:16.149997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:54.251 [2024-11-19 08:44:16.150006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:54.251 [2024-11-19 08:44:16.150018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.251 [2024-11-19 08:44:16.150102] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:54.251 [2024-11-19 08:44:16.150134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:54.251 [2024-11-19 08:44:16.150142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:54.251 [2024-11-19 08:44:16.150159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.251 [2024-11-19 08:44:16.150168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:54.251 [2024-11-19 08:44:16.150178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:54.251 [2024-11-19 08:44:16.150185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:54.251 [2024-11-19 08:44:16.150192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:54.251 [2024-11-19 08:44:16.150199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:54.251 [2024-11-19 08:44:16.150207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:54.251 [2024-11-19 08:44:16.150215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:54.251 [2024-11-19 08:44:16.150223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:54.251 [2024-11-19 08:44:16.150229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:54.251 [2024-11-19 08:44:16.150239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:54.251 [2024-11-19 08:44:16.150246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:54.251 [2024-11-19 08:44:16.150254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.251 [2024-11-19 08:44:16.150261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:54.251 [2024-11-19 08:44:16.150269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:54.251 [2024-11-19 08:44:16.150276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.251 [2024-11-19 08:44:16.150284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:54.251 [2024-11-19 08:44:16.150291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:54.251 [2024-11-19 08:44:16.150299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.251 [2024-11-19 08:44:16.150306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:54.251 [2024-11-19 08:44:16.150314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:54.251 [2024-11-19 08:44:16.150319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.251 [2024-11-19 08:44:16.150327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:54.251 [2024-11-19 08:44:16.150333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:54.251 [2024-11-19 08:44:16.150341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.251 [2024-11-19 08:44:16.150347] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:54.251 [2024-11-19 08:44:16.150357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:54.251 [2024-11-19 08:44:16.150363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.252 [2024-11-19 08:44:16.150372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:54.252 [2024-11-19 08:44:16.150379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:54.252 [2024-11-19 08:44:16.150386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:54.252 [2024-11-19 08:44:16.150393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:54.252 [2024-11-19 08:44:16.150401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:54.252 [2024-11-19 08:44:16.150407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:54.252 [2024-11-19 08:44:16.150415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:54.252 [2024-11-19 08:44:16.150422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:54.252 [2024-11-19 08:44:16.150429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.252 [2024-11-19 08:44:16.150436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:54.252 [2024-11-19 08:44:16.150444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:54.252 [2024-11-19 08:44:16.150451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.252 [2024-11-19 08:44:16.150459] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:54.252 [2024-11-19 08:44:16.150467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:54.252 [2024-11-19 08:44:16.150477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:54.252 [2024-11-19 08:44:16.150497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.252 [2024-11-19 08:44:16.150513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:54.252 [2024-11-19 08:44:16.150520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:54.252 [2024-11-19 08:44:16.150528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:54.252 [2024-11-19 08:44:16.150535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:54.252 [2024-11-19 08:44:16.150544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:54.252 [2024-11-19 08:44:16.150550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:54.252 [2024-11-19 08:44:16.150564] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:54.252 [2024-11-19 08:44:16.150576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:54.252 [2024-11-19 08:44:16.150586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:54.252 [2024-11-19 08:44:16.150594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:54.252 [2024-11-19 08:44:16.150603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:54.252 [2024-11-19 08:44:16.150610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:54.252 [2024-11-19 08:44:16.150619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:54.252 [2024-11-19 08:44:16.150629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:54.252 [2024-11-19 08:44:16.150640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:54.252 [2024-11-19 08:44:16.150646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:54.252 [2024-11-19 08:44:16.150654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:54.252 [2024-11-19 08:44:16.150661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:54.252 [2024-11-19 08:44:16.150670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:54.252 [2024-11-19 08:44:16.150677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:54.252 [2024-11-19 08:44:16.150685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:54.252 [2024-11-19 08:44:16.150693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:54.252 [2024-11-19 08:44:16.150702] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:54.252 [2024-11-19 08:44:16.150710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:54.252 [2024-11-19 08:44:16.150736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:54.252 [2024-11-19 08:44:16.150744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:54.252 [2024-11-19 08:44:16.150753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:54.252 [2024-11-19 08:44:16.150761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:54.252 [2024-11-19 08:44:16.150771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.252 [2024-11-19 08:44:16.150780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:54.252 [2024-11-19 08:44:16.150793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:22:54.252 [2024-11-19 08:44:16.150801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.252 [2024-11-19 08:44:16.150844] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:54.252 [2024-11-19 08:44:16.150854] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:58.448 [2024-11-19 08:44:20.168937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.169036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:58.448 [2024-11-19 08:44:20.169056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4025.834 ms 00:22:58.448 [2024-11-19 08:44:20.169065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.448 [2024-11-19 08:44:20.189184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.189246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:58.448 [2024-11-19 08:44:20.189263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.026 ms 00:22:58.448 [2024-11-19 08:44:20.189272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.448 [2024-11-19 08:44:20.189415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.189425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:58.448 [2024-11-19 08:44:20.189437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:58.448 [2024-11-19 08:44:20.189444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.448 [2024-11-19 08:44:20.207338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.207415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:58.448 [2024-11-19 08:44:20.207439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.877 ms 00:22:58.448 [2024-11-19 08:44:20.207451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.448 [2024-11-19 08:44:20.207497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.207506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:58.448 [2024-11-19 08:44:20.207517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:58.448 [2024-11-19 08:44:20.207524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.448 [2024-11-19 08:44:20.208367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.208389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:58.448 [2024-11-19 08:44:20.208402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:22:58.448 [2024-11-19 08:44:20.208410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.448 [2024-11-19 08:44:20.208531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.208550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:58.448 [2024-11-19 08:44:20.208560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:58.448 [2024-11-19 08:44:20.208568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.448 [2024-11-19 08:44:20.220627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.220667] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:58.448 [2024-11-19 08:44:20.220680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.055 ms 00:22:58.448 [2024-11-19 08:44:20.220688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.448 [2024-11-19 08:44:20.229558] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:58.448 [2024-11-19 08:44:20.234756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.234785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:58.448 [2024-11-19 08:44:20.234795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.996 ms 00:22:58.448 [2024-11-19 08:44:20.234805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.448 [2024-11-19 08:44:20.331459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.331516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:58.448 [2024-11-19 08:44:20.331534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.810 ms 00:22:58.448 [2024-11-19 08:44:20.331548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.448 [2024-11-19 08:44:20.331752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.331768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:58.448 [2024-11-19 08:44:20.331790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:22:58.448 [2024-11-19 08:44:20.331801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.448 [2024-11-19 08:44:20.335675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.335712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:58.448 [2024-11-19 08:44:20.335752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.850 ms 00:22:58.448 [2024-11-19 08:44:20.335772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.448 [2024-11-19 08:44:20.338577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.338611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:58.448 [2024-11-19 08:44:20.338637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.776 ms 00:22:58.448 [2024-11-19 08:44:20.338647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.448 [2024-11-19 08:44:20.338973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.448 [2024-11-19 08:44:20.339000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:58.448 [2024-11-19 08:44:20.339025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:22:58.448 [2024-11-19 08:44:20.339039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.708 [2024-11-19 08:44:20.380292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.708 [2024-11-19 08:44:20.380334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:58.708 [2024-11-19 08:44:20.380349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.310 ms 00:22:58.708 [2024-11-19 08:44:20.380360] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.708 [2024-11-19 08:44:20.385987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.708 [2024-11-19 08:44:20.386027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:58.708 [2024-11-19 08:44:20.386039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.596 ms 00:22:58.708 [2024-11-19 08:44:20.386050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.708 [2024-11-19 08:44:20.389286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.708 [2024-11-19 08:44:20.389321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:58.708 [2024-11-19 08:44:20.389348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms 00:22:58.708 [2024-11-19 08:44:20.389358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.708 [2024-11-19 08:44:20.392912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.708 [2024-11-19 08:44:20.392948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:58.708 [2024-11-19 08:44:20.392958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.529 ms 00:22:58.708 [2024-11-19 08:44:20.392970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.708 [2024-11-19 08:44:20.393007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.708 [2024-11-19 08:44:20.393019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:58.708 [2024-11-19 08:44:20.393028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:58.708 [2024-11-19 08:44:20.393038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.708 [2024-11-19 08:44:20.393115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.708 [2024-11-19 08:44:20.393127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:58.708 [2024-11-19 08:44:20.393151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:58.708 [2024-11-19 08:44:20.393165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.708 [2024-11-19 08:44:20.394552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4264.450 ms, result 0 00:22:58.708 { 00:22:58.708 "name": "ftl0", 00:22:58.708 "uuid": "22820913-26bd-4288-afb2-5561b9b6e2aa" 00:22:58.708 } 00:22:58.708 08:44:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:58.708 08:44:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:58.968 08:44:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:58.968 08:44:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:58.968 08:44:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:59.227 /dev/nbd0 00:22:59.227 08:44:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:59.227 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:59.227 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:22:59.227 08:44:20 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:59.227 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:59.227 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:59.227 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:22:59.227 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:59.227 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:59.227 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:59.227 1+0 records in 00:22:59.227 1+0 records out 00:22:59.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045789 s, 8.9 MB/s 00:22:59.228 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:59.228 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:22:59.228 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:59.228 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:59.228 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:22:59.228 08:44:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:59.228 [2024-11-19 08:44:21.017342] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:22:59.228 [2024-11-19 08:44:21.017499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89144 ] 00:22:59.487 [2024-11-19 08:44:21.177342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.487 [2024-11-19 08:44:21.202063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.425  [2024-11-19T08:44:23.269Z] Copying: 235/1024 [MB] (235 MBps) [2024-11-19T08:44:24.647Z] Copying: 474/1024 [MB] (238 MBps) [2024-11-19T08:44:25.585Z] Copying: 711/1024 [MB] (237 MBps) [2024-11-19T08:44:25.844Z] Copying: 940/1024 [MB] (229 MBps) [2024-11-19T08:44:26.105Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:23:04.198 00:23:04.198 08:44:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:05.575 08:44:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:05.834 [2024-11-19 08:44:27.503570] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
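The data-fill phase above reduces, in outline, to the following shell steps; this is a condensed sketch assembled only from the commands already visible in the trace (paths, block size, and block counts copied from the log):

  # Fill a 1 GiB file (262144 x 4 KiB blocks) with random data via spdk_dd.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
  # Record the file's checksum so the data can be verified after the dirty shutdown.
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
  # Replay the same data onto the FTL bdev through its NBD export at /dev/nbd0.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct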
00:23:05.834 [2024-11-19 08:44:27.503760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89213 ] 00:23:05.834 [2024-11-19 08:44:27.660191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.834 [2024-11-19 08:44:27.684867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.212  [2024-11-19T08:44:30.057Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-19T08:44:30.995Z] Copying: 36/1024 [MB] (18 MBps) [2024-11-19T08:44:31.934Z] Copying: 56/1024 [MB] (19 MBps) [2024-11-19T08:44:32.872Z] Copying: 74/1024 [MB] (18 MBps) [2024-11-19T08:44:33.809Z] Copying: 93/1024 [MB] (18 MBps) [2024-11-19T08:44:34.747Z] Copying: 113/1024 [MB] (19 MBps) [2024-11-19T08:44:36.189Z] Copying: 132/1024 [MB] (19 MBps) [2024-11-19T08:44:36.757Z] Copying: 143060/1048576 [kB] (7804 kBps) [2024-11-19T08:44:38.136Z] Copying: 158/1024 [MB] (18 MBps) [2024-11-19T08:44:39.073Z] Copying: 177/1024 [MB] (18 MBps) [2024-11-19T08:44:40.010Z] Copying: 195/1024 [MB] (18 MBps) [2024-11-19T08:44:40.947Z] Copying: 214/1024 [MB] (18 MBps) [2024-11-19T08:44:41.884Z] Copying: 233/1024 [MB] (18 MBps) [2024-11-19T08:44:42.821Z] Copying: 251/1024 [MB] (18 MBps) [2024-11-19T08:44:43.758Z] Copying: 270/1024 [MB] (18 MBps) [2024-11-19T08:44:45.137Z] Copying: 288/1024 [MB] (18 MBps) [2024-11-19T08:44:46.074Z] Copying: 307/1024 [MB] (18 MBps) [2024-11-19T08:44:47.012Z] Copying: 326/1024 [MB] (18 MBps) [2024-11-19T08:44:47.950Z] Copying: 344/1024 [MB] (18 MBps) [2024-11-19T08:44:48.887Z] Copying: 363/1024 [MB] (18 MBps) [2024-11-19T08:44:49.824Z] Copying: 382/1024 [MB] (18 MBps) [2024-11-19T08:44:50.762Z] Copying: 401/1024 [MB] (18 MBps) [2024-11-19T08:44:52.141Z] Copying: 419/1024 [MB] (18 MBps) [2024-11-19T08:44:52.709Z] Copying: 437/1024 [MB] (18 MBps) [2024-11-19T08:44:54.086Z] Copying: 456/1024 [MB] (18 MBps) [2024-11-19T08:44:55.024Z] Copying: 475/1024 [MB] (18 MBps) [2024-11-19T08:44:55.962Z] Copying: 494/1024 [MB] (18 MBps) [2024-11-19T08:44:56.930Z] Copying: 513/1024 [MB] (19 MBps) [2024-11-19T08:44:57.869Z] Copying: 532/1024 [MB] (19 MBps) [2024-11-19T08:44:58.808Z] Copying: 551/1024 [MB] (18 MBps) [2024-11-19T08:44:59.746Z] Copying: 570/1024 [MB] (19 MBps) [2024-11-19T08:45:01.127Z] Copying: 589/1024 [MB] (18 MBps) [2024-11-19T08:45:01.696Z] Copying: 607/1024 [MB] (18 MBps) [2024-11-19T08:45:03.078Z] Copying: 626/1024 [MB] (18 MBps) [2024-11-19T08:45:04.017Z] Copying: 644/1024 [MB] (18 MBps) [2024-11-19T08:45:04.956Z] Copying: 662/1024 [MB] (17 MBps) [2024-11-19T08:45:05.895Z] Copying: 680/1024 [MB] (18 MBps) [2024-11-19T08:45:06.834Z] Copying: 699/1024 [MB] (18 MBps) [2024-11-19T08:45:07.773Z] Copying: 718/1024 [MB] (18 MBps) [2024-11-19T08:45:08.711Z] Copying: 737/1024 [MB] (18 MBps) [2024-11-19T08:45:10.091Z] Copying: 755/1024 [MB] (18 MBps) [2024-11-19T08:45:11.028Z] Copying: 774/1024 [MB] (18 MBps) [2024-11-19T08:45:11.965Z] Copying: 793/1024 [MB] (18 MBps) [2024-11-19T08:45:12.903Z] Copying: 812/1024 [MB] (18 MBps) [2024-11-19T08:45:13.841Z] Copying: 830/1024 [MB] (18 MBps) [2024-11-19T08:45:14.782Z] Copying: 849/1024 [MB] (18 MBps) [2024-11-19T08:45:15.722Z] Copying: 867/1024 [MB] (18 MBps) [2024-11-19T08:45:16.710Z] Copying: 886/1024 [MB] (18 MBps) [2024-11-19T08:45:17.663Z] Copying: 904/1024 [MB] (18 MBps) [2024-11-19T08:45:19.042Z] Copying: 922/1024 [MB] (18 MBps) 
[2024-11-19T08:45:19.988Z] Copying: 941/1024 [MB] (18 MBps) [2024-11-19T08:45:20.928Z] Copying: 961/1024 [MB] (19 MBps) [2024-11-19T08:45:21.867Z] Copying: 980/1024 [MB] (19 MBps) [2024-11-19T08:45:22.805Z] Copying: 1000/1024 [MB] (19 MBps) [2024-11-19T08:45:23.066Z] Copying: 1020/1024 [MB] (19 MBps) [2024-11-19T08:45:23.326Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:24:01.419 00:24:01.419 08:45:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:01.419 08:45:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:01.680 08:45:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:01.680 [2024-11-19 08:45:23.537309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.680 [2024-11-19 08:45:23.537379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:01.680 [2024-11-19 08:45:23.537398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:01.680 [2024-11-19 08:45:23.537406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.680 [2024-11-19 08:45:23.537435] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:01.680 [2024-11-19 08:45:23.538748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.680 [2024-11-19 08:45:23.538783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:01.680 [2024-11-19 08:45:23.538795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:24:01.680 [2024-11-19 08:45:23.538804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.680 [2024-11-19 08:45:23.541097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.680 [2024-11-19 08:45:23.541138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:01.680 [2024-11-19 08:45:23.541149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.275 ms 00:24:01.680 [2024-11-19 08:45:23.541160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.680 [2024-11-19 08:45:23.559211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.680 [2024-11-19 08:45:23.559251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:01.680 [2024-11-19 08:45:23.559282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.068 ms 00:24:01.680 [2024-11-19 08:45:23.559292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.680 [2024-11-19 08:45:23.564049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.680 [2024-11-19 08:45:23.564084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:01.680 [2024-11-19 08:45:23.564093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.731 ms 00:24:01.680 [2024-11-19 08:45:23.564102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.680 [2024-11-19 08:45:23.566275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.680 [2024-11-19 08:45:23.566313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:01.680 [2024-11-19 08:45:23.566323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:24:01.680 [2024-11-19 08:45:23.566348] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.680 [2024-11-19 08:45:23.572934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.680 [2024-11-19 08:45:23.572977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:01.680 [2024-11-19 08:45:23.572988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.570 ms 00:24:01.680 [2024-11-19 08:45:23.573000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.680 [2024-11-19 08:45:23.573135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.680 [2024-11-19 08:45:23.573149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:01.680 [2024-11-19 08:45:23.573158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:24:01.680 [2024-11-19 08:45:23.573168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.680 [2024-11-19 08:45:23.575778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.680 [2024-11-19 08:45:23.575823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:01.680 [2024-11-19 08:45:23.575847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:24:01.680 [2024-11-19 08:45:23.575856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.680 [2024-11-19 08:45:23.577605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.680 [2024-11-19 08:45:23.577658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:01.680 [2024-11-19 08:45:23.577667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.722 ms 00:24:01.680 [2024-11-19 08:45:23.577676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.680 [2024-11-19 08:45:23.578974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.680 [2024-11-19 08:45:23.579016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:01.680 [2024-11-19 08:45:23.579026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:24:01.680 [2024-11-19 08:45:23.579036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.680 [2024-11-19 08:45:23.580268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.680 [2024-11-19 08:45:23.580306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:01.680 [2024-11-19 08:45:23.580315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.180 ms 00:24:01.680 [2024-11-19 08:45:23.580324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.680 [2024-11-19 08:45:23.580352] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:01.680 [2024-11-19 08:45:23.580384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:01.680 [2024-11-19 08:45:23.580613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580700] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 
08:45:23.580965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.580991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 
00:24:01.681 [2024-11-19 08:45:23.581198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:01.681 [2024-11-19 08:45:23.581403] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:01.681 [2024-11-19 08:45:23.581412] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22820913-26bd-4288-afb2-5561b9b6e2aa 00:24:01.681 [2024-11-19 08:45:23.581423] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:01.681 [2024-11-19 08:45:23.581432] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:01.681 [2024-11-19 08:45:23.581443] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:01.681 [2024-11-19 08:45:23.581451] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:01.681 [2024-11-19 08:45:23.581461] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:01.681 [2024-11-19 08:45:23.581469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:01.681 [2024-11-19 08:45:23.581480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:01.681 [2024-11-19 08:45:23.581487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:01.681 [2024-11-19 08:45:23.581496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:01.682 [2024-11-19 08:45:23.581504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.682 [2024-11-19 08:45:23.581530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:01.682 [2024-11-19 08:45:23.581539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:24:01.682 [2024-11-19 08:45:23.581553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.942 [2024-11-19 08:45:23.584601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.942 [2024-11-19 08:45:23.584634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:01.942 [2024-11-19 08:45:23.584644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.031 ms 00:24:01.942 [2024-11-19 08:45:23.584654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.942 [2024-11-19 08:45:23.584862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.942 [2024-11-19 08:45:23.584877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:01.942 [2024-11-19 08:45:23.584888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:24:01.942 [2024-11-19 08:45:23.584914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.942 [2024-11-19 08:45:23.595979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.942 [2024-11-19 08:45:23.596018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:01.942 [2024-11-19 08:45:23.596045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.942 [2024-11-19 08:45:23.596069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.942 [2024-11-19 08:45:23.596133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.942 [2024-11-19 08:45:23.596144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:01.942 [2024-11-19 08:45:23.596159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.942 [2024-11-19 08:45:23.596169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.942 [2024-11-19 08:45:23.596246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.942 [2024-11-19 08:45:23.596265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:01.942 [2024-11-19 08:45:23.596273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.942 [2024-11-19 08:45:23.596282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.942 [2024-11-19 08:45:23.596302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.942 
[2024-11-19 08:45:23.596312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:01.942 [2024-11-19 08:45:23.596319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.942 [2024-11-19 08:45:23.596332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.942 [2024-11-19 08:45:23.622857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.942 [2024-11-19 08:45:23.622921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:01.942 [2024-11-19 08:45:23.622950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.942 [2024-11-19 08:45:23.622962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.942 [2024-11-19 08:45:23.637711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.943 [2024-11-19 08:45:23.637764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:01.943 [2024-11-19 08:45:23.637796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.943 [2024-11-19 08:45:23.637807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.943 [2024-11-19 08:45:23.637890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.943 [2024-11-19 08:45:23.637909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:01.943 [2024-11-19 08:45:23.637917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.943 [2024-11-19 08:45:23.637927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.943 [2024-11-19 08:45:23.638041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.943 [2024-11-19 08:45:23.638053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:01.943 [2024-11-19 08:45:23.638077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.943 [2024-11-19 08:45:23.638087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.943 [2024-11-19 08:45:23.638185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.943 [2024-11-19 08:45:23.638214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:01.943 [2024-11-19 08:45:23.638223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.943 [2024-11-19 08:45:23.638233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.943 [2024-11-19 08:45:23.638275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.943 [2024-11-19 08:45:23.638288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:01.943 [2024-11-19 08:45:23.638297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.943 [2024-11-19 08:45:23.638307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.943 [2024-11-19 08:45:23.638359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.943 [2024-11-19 08:45:23.638374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:01.943 [2024-11-19 08:45:23.638382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.943 [2024-11-19 08:45:23.638392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.943 [2024-11-19 08:45:23.638456] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.943 [2024-11-19 08:45:23.638469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:01.943 [2024-11-19 08:45:23.638477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.943 [2024-11-19 08:45:23.638487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.943 [2024-11-19 08:45:23.638646] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 101.483 ms, result 0 00:24:01.943 true 00:24:01.943 08:45:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 88998 00:24:01.943 08:45:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid88998 00:24:01.943 08:45:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:01.943 [2024-11-19 08:45:23.764222] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:01.943 [2024-11-19 08:45:23.764375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89794 ] 00:24:02.202 [2024-11-19 08:45:23.920009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.202 [2024-11-19 08:45:23.964010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.581  [2024-11-19T08:45:26.425Z] Copying: 242/1024 [MB] (242 MBps) [2024-11-19T08:45:27.361Z] Copying: 485/1024 [MB] (243 MBps) [2024-11-19T08:45:28.297Z] Copying: 726/1024 [MB] (240 MBps) [2024-11-19T08:45:28.557Z] Copying: 962/1024 [MB] (236 MBps) [2024-11-19T08:45:28.818Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:24:06.911 00:24:06.911 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 88998 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:06.911 08:45:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:06.911 [2024-11-19 08:45:28.779051] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
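The crash-and-recover phase above is, in outline, the following shell sequence; again a sketch assembled from the commands visible in the trace (pid 88998 is the spdk_tgt instance started earlier in this run):

  # Simulate a dirty shutdown: SIGKILL the running spdk_tgt and remove its trace file.
  kill -9 88998
  rm -f /dev/shm/spdk_tgt_trace.pid88998
  # Prepare a second random-data file for the post-crash write.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
  # Re-create the bdev stack from the saved JSON config and write the new data at a
  # 262144-block output offset (--seek); opening ftl0 here exercises the
  # dirty-shutdown recovery path traced below (blobstore recovery, restore of
  # NV cache, valid map, band info, P2L checkpoints and L2P).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 \
      --count=262144 --seek=262144 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json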
00:24:06.911 [2024-11-19 08:45:28.779183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89848 ] 00:24:07.171 [2024-11-19 08:45:28.935590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.171 [2024-11-19 08:45:28.975497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.431 [2024-11-19 08:45:29.129506] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.431 [2024-11-19 08:45:29.129579] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.431 [2024-11-19 08:45:29.193951] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:07.431 [2024-11-19 08:45:29.194321] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:07.431 [2024-11-19 08:45:29.194555] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:07.692 [2024-11-19 08:45:29.496492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.692 [2024-11-19 08:45:29.496541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:07.692 [2024-11-19 08:45:29.496554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:07.692 [2024-11-19 08:45:29.496562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.692 [2024-11-19 08:45:29.496608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.692 [2024-11-19 08:45:29.496629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:07.692 [2024-11-19 08:45:29.496637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:07.692 [2024-11-19 08:45:29.496648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.692 [2024-11-19 08:45:29.496668] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:07.692 [2024-11-19 08:45:29.496890] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:07.692 [2024-11-19 08:45:29.496925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.692 [2024-11-19 08:45:29.496933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:07.692 [2024-11-19 08:45:29.496942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:24:07.692 [2024-11-19 08:45:29.496949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.692 [2024-11-19 08:45:29.499334] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:07.692 [2024-11-19 08:45:29.502898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.692 [2024-11-19 08:45:29.502931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:07.692 [2024-11-19 08:45:29.502957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.571 ms 00:24:07.692 [2024-11-19 08:45:29.502965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.692 [2024-11-19 08:45:29.503029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.692 [2024-11-19 08:45:29.503039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:07.692 [2024-11-19 08:45:29.503047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:07.692 [2024-11-19 08:45:29.503056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.692 [2024-11-19 08:45:29.515229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.692 [2024-11-19 08:45:29.515254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:07.692 [2024-11-19 08:45:29.515264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.156 ms 00:24:07.692 [2024-11-19 08:45:29.515277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.692 [2024-11-19 08:45:29.515372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.692 [2024-11-19 08:45:29.515384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:07.692 [2024-11-19 08:45:29.515392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:07.692 [2024-11-19 08:45:29.515399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.692 [2024-11-19 08:45:29.515458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.692 [2024-11-19 08:45:29.515469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:07.692 [2024-11-19 08:45:29.515477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:07.692 [2024-11-19 08:45:29.515484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.692 [2024-11-19 08:45:29.515508] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:07.692 [2024-11-19 08:45:29.518187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.692 [2024-11-19 08:45:29.518215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:07.692 [2024-11-19 08:45:29.518231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.691 ms 00:24:07.692 [2024-11-19 08:45:29.518242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.692 [2024-11-19 08:45:29.518274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.692 [2024-11-19 08:45:29.518282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:07.692 [2024-11-19 08:45:29.518289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:07.692 [2024-11-19 08:45:29.518296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.692 [2024-11-19 08:45:29.518317] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:07.692 [2024-11-19 08:45:29.518340] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:07.692 [2024-11-19 08:45:29.518384] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:07.693 [2024-11-19 08:45:29.518405] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:07.693 [2024-11-19 08:45:29.518515] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:07.693 [2024-11-19 08:45:29.518530] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:07.693 
[2024-11-19 08:45:29.518541] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:07.693 [2024-11-19 08:45:29.518558] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:07.693 [2024-11-19 08:45:29.518574] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:07.693 [2024-11-19 08:45:29.518583] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:07.693 [2024-11-19 08:45:29.518591] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:07.693 [2024-11-19 08:45:29.518602] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:07.693 [2024-11-19 08:45:29.518610] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:07.693 [2024-11-19 08:45:29.518625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.693 [2024-11-19 08:45:29.518638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:07.693 [2024-11-19 08:45:29.518646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:24:07.693 [2024-11-19 08:45:29.518653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.693 [2024-11-19 08:45:29.518738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.693 [2024-11-19 08:45:29.518773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:07.693 [2024-11-19 08:45:29.518781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:07.693 [2024-11-19 08:45:29.518788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.693 [2024-11-19 08:45:29.518886] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:07.693 [2024-11-19 08:45:29.518911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:07.693 [2024-11-19 08:45:29.518925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.693 [2024-11-19 08:45:29.518933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.693 [2024-11-19 08:45:29.518949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:07.693 [2024-11-19 08:45:29.518956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:07.693 [2024-11-19 08:45:29.518969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:07.693 [2024-11-19 08:45:29.518976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:07.693 [2024-11-19 08:45:29.518983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:07.693 [2024-11-19 08:45:29.518989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.693 [2024-11-19 08:45:29.518996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:07.693 [2024-11-19 08:45:29.519002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:07.693 [2024-11-19 08:45:29.519009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.693 [2024-11-19 08:45:29.519015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:07.693 [2024-11-19 08:45:29.519022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:07.693 [2024-11-19 08:45:29.519028] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.693 [2024-11-19 08:45:29.519035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:07.693 [2024-11-19 08:45:29.519041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:07.693 [2024-11-19 08:45:29.519048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.693 [2024-11-19 08:45:29.519054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:07.693 [2024-11-19 08:45:29.519060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:07.693 [2024-11-19 08:45:29.519066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.693 [2024-11-19 08:45:29.519080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:07.693 [2024-11-19 08:45:29.519090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:07.693 [2024-11-19 08:45:29.519096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.693 [2024-11-19 08:45:29.519103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:07.693 [2024-11-19 08:45:29.519109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:07.693 [2024-11-19 08:45:29.519115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.693 [2024-11-19 08:45:29.519121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:07.693 [2024-11-19 08:45:29.519127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:07.693 [2024-11-19 08:45:29.519133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.693 [2024-11-19 08:45:29.519140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:07.693 [2024-11-19 08:45:29.519146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:07.693 [2024-11-19 08:45:29.519151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.693 [2024-11-19 08:45:29.519157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:07.693 [2024-11-19 08:45:29.519163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:07.693 [2024-11-19 08:45:29.519169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.693 [2024-11-19 08:45:29.519175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:07.693 [2024-11-19 08:45:29.519184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:07.693 [2024-11-19 08:45:29.519190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.693 [2024-11-19 08:45:29.519196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:07.693 [2024-11-19 08:45:29.519202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:07.693 [2024-11-19 08:45:29.519208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.693 [2024-11-19 08:45:29.519214] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:07.693 [2024-11-19 08:45:29.519229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:07.693 [2024-11-19 08:45:29.519237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.693 [2024-11-19 08:45:29.519244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.693 [2024-11-19 
08:45:29.519251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:07.693 [2024-11-19 08:45:29.519257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:07.693 [2024-11-19 08:45:29.519263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:07.693 [2024-11-19 08:45:29.519269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:07.693 [2024-11-19 08:45:29.519276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:07.693 [2024-11-19 08:45:29.519282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:07.693 [2024-11-19 08:45:29.519290] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:07.693 [2024-11-19 08:45:29.519306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.693 [2024-11-19 08:45:29.519315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:07.693 [2024-11-19 08:45:29.519322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:07.693 [2024-11-19 08:45:29.519330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:07.693 [2024-11-19 08:45:29.519338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:07.693 [2024-11-19 08:45:29.519345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:07.693 [2024-11-19 08:45:29.519352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:07.693 [2024-11-19 08:45:29.519359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:07.693 [2024-11-19 08:45:29.519376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:07.693 [2024-11-19 08:45:29.519383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:07.693 [2024-11-19 08:45:29.519390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:07.693 [2024-11-19 08:45:29.519398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:07.693 [2024-11-19 08:45:29.519404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:07.693 [2024-11-19 08:45:29.519411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:07.693 [2024-11-19 08:45:29.519418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:07.693 [2024-11-19 08:45:29.519425] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:07.693 [2024-11-19 08:45:29.519436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.693 [2024-11-19 08:45:29.519448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:07.693 [2024-11-19 08:45:29.519455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:07.693 [2024-11-19 08:45:29.519461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:07.693 [2024-11-19 08:45:29.519468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:07.693 [2024-11-19 08:45:29.519475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.693 [2024-11-19 08:45:29.519484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:07.693 [2024-11-19 08:45:29.519498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:24:07.693 [2024-11-19 08:45:29.519506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.693 [2024-11-19 08:45:29.540986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.694 [2024-11-19 08:45:29.541017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:07.694 [2024-11-19 08:45:29.541044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.457 ms 00:24:07.694 [2024-11-19 08:45:29.541052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.694 [2024-11-19 08:45:29.541129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.694 [2024-11-19 08:45:29.541155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:07.694 [2024-11-19 08:45:29.541168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:07.694 [2024-11-19 08:45:29.541184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.694 [2024-11-19 08:45:29.572302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.694 [2024-11-19 08:45:29.572402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:07.694 [2024-11-19 08:45:29.572453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.108 ms 00:24:07.694 [2024-11-19 08:45:29.572497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.694 [2024-11-19 08:45:29.572651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.694 [2024-11-19 08:45:29.572747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:07.694 [2024-11-19 08:45:29.572815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:07.694 [2024-11-19 08:45:29.572844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.694 [2024-11-19 08:45:29.574112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.694 [2024-11-19 08:45:29.574180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:07.694 [2024-11-19 08:45:29.574214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:24:07.694 [2024-11-19 08:45:29.574242] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.694 [2024-11-19 08:45:29.574653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.694 [2024-11-19 08:45:29.574757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:07.694 [2024-11-19 08:45:29.574831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:24:07.694 [2024-11-19 08:45:29.574859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.694 [2024-11-19 08:45:29.590028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.694 [2024-11-19 08:45:29.590074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:07.694 [2024-11-19 08:45:29.590092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.127 ms 00:24:07.694 [2024-11-19 08:45:29.590105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.694 [2024-11-19 08:45:29.594809] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:07.694 [2024-11-19 08:45:29.594854] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:07.694 [2024-11-19 08:45:29.594874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.694 [2024-11-19 08:45:29.594888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:07.694 [2024-11-19 08:45:29.594901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.552 ms 00:24:07.694 [2024-11-19 08:45:29.594913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.953 [2024-11-19 08:45:29.610483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.953 [2024-11-19 08:45:29.610546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:07.953 [2024-11-19 08:45:29.610557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.540 ms 00:24:07.953 [2024-11-19 08:45:29.610566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.953 [2024-11-19 08:45:29.612749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.953 [2024-11-19 08:45:29.612779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:07.953 [2024-11-19 08:45:29.612804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:24:07.953 [2024-11-19 08:45:29.612811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.953 [2024-11-19 08:45:29.614492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.953 [2024-11-19 08:45:29.614518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:07.953 [2024-11-19 08:45:29.614526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms 00:24:07.953 [2024-11-19 08:45:29.614533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.953 [2024-11-19 08:45:29.614849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.953 [2024-11-19 08:45:29.614878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:07.953 [2024-11-19 08:45:29.614887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:24:07.953 [2024-11-19 08:45:29.614894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.953 
[2024-11-19 08:45:29.646832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.953 [2024-11-19 08:45:29.646898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:07.953 [2024-11-19 08:45:29.646914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.934 ms 00:24:07.953 [2024-11-19 08:45:29.646929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.953 [2024-11-19 08:45:29.653220] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:07.953 [2024-11-19 08:45:29.657456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.953 [2024-11-19 08:45:29.657479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:07.953 [2024-11-19 08:45:29.657506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.502 ms 00:24:07.953 [2024-11-19 08:45:29.657514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.953 [2024-11-19 08:45:29.657609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.953 [2024-11-19 08:45:29.657622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:07.953 [2024-11-19 08:45:29.657632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:07.953 [2024-11-19 08:45:29.657643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.953 [2024-11-19 08:45:29.657726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.953 [2024-11-19 08:45:29.657744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:07.953 [2024-11-19 08:45:29.657757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:07.953 [2024-11-19 08:45:29.657764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.953 [2024-11-19 08:45:29.657794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.953 [2024-11-19 08:45:29.657806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:07.953 [2024-11-19 08:45:29.657814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:07.953 [2024-11-19 08:45:29.657821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.953 [2024-11-19 08:45:29.657885] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:07.954 [2024-11-19 08:45:29.657899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.954 [2024-11-19 08:45:29.657907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:07.954 [2024-11-19 08:45:29.657916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:07.954 [2024-11-19 08:45:29.657924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.954 [2024-11-19 08:45:29.662707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.954 [2024-11-19 08:45:29.662752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:07.954 [2024-11-19 08:45:29.662763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.771 ms 00:24:07.954 [2024-11-19 08:45:29.662771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.954 [2024-11-19 08:45:29.662841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.954 [2024-11-19 08:45:29.662855] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:07.954 [2024-11-19 08:45:29.662863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:07.954 [2024-11-19 08:45:29.662870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.954 [2024-11-19 08:45:29.664402] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 167.725 ms, result 0 00:24:08.890  [2024-11-19T08:45:31.732Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-19T08:45:33.108Z] Copying: 49/1024 [MB] (24 MBps) [2024-11-19T08:45:33.674Z] Copying: 75/1024 [MB] (25 MBps) [2024-11-19T08:45:35.051Z] Copying: 101/1024 [MB] (26 MBps) [2024-11-19T08:45:36.011Z] Copying: 126/1024 [MB] (25 MBps) [2024-11-19T08:45:36.956Z] Copying: 151/1024 [MB] (24 MBps) [2024-11-19T08:45:37.893Z] Copying: 175/1024 [MB] (24 MBps) [2024-11-19T08:45:38.829Z] Copying: 201/1024 [MB] (25 MBps) [2024-11-19T08:45:39.767Z] Copying: 226/1024 [MB] (25 MBps) [2024-11-19T08:45:40.707Z] Copying: 252/1024 [MB] (25 MBps) [2024-11-19T08:45:42.089Z] Copying: 279/1024 [MB] (27 MBps) [2024-11-19T08:45:42.659Z] Copying: 306/1024 [MB] (26 MBps) [2024-11-19T08:45:44.041Z] Copying: 333/1024 [MB] (27 MBps) [2024-11-19T08:45:44.981Z] Copying: 361/1024 [MB] (27 MBps) [2024-11-19T08:45:45.922Z] Copying: 387/1024 [MB] (25 MBps) [2024-11-19T08:45:46.863Z] Copying: 412/1024 [MB] (25 MBps) [2024-11-19T08:45:47.804Z] Copying: 437/1024 [MB] (24 MBps) [2024-11-19T08:45:48.744Z] Copying: 462/1024 [MB] (24 MBps) [2024-11-19T08:45:49.684Z] Copying: 487/1024 [MB] (25 MBps) [2024-11-19T08:45:51.066Z] Copying: 511/1024 [MB] (24 MBps) [2024-11-19T08:45:51.637Z] Copying: 536/1024 [MB] (25 MBps) [2024-11-19T08:45:53.020Z] Copying: 561/1024 [MB] (24 MBps) [2024-11-19T08:45:53.961Z] Copying: 586/1024 [MB] (24 MBps) [2024-11-19T08:45:54.901Z] Copying: 610/1024 [MB] (24 MBps) [2024-11-19T08:45:55.841Z] Copying: 636/1024 [MB] (25 MBps) [2024-11-19T08:45:56.804Z] Copying: 661/1024 [MB] (24 MBps) [2024-11-19T08:45:57.746Z] Copying: 686/1024 [MB] (24 MBps) [2024-11-19T08:45:58.688Z] Copying: 711/1024 [MB] (25 MBps) [2024-11-19T08:45:59.627Z] Copying: 737/1024 [MB] (25 MBps) [2024-11-19T08:46:01.009Z] Copying: 761/1024 [MB] (24 MBps) [2024-11-19T08:46:01.949Z] Copying: 787/1024 [MB] (25 MBps) [2024-11-19T08:46:02.889Z] Copying: 812/1024 [MB] (25 MBps) [2024-11-19T08:46:03.829Z] Copying: 836/1024 [MB] (24 MBps) [2024-11-19T08:46:04.770Z] Copying: 860/1024 [MB] (24 MBps) [2024-11-19T08:46:05.710Z] Copying: 884/1024 [MB] (23 MBps) [2024-11-19T08:46:06.652Z] Copying: 910/1024 [MB] (25 MBps) [2024-11-19T08:46:08.034Z] Copying: 935/1024 [MB] (25 MBps) [2024-11-19T08:46:08.604Z] Copying: 960/1024 [MB] (24 MBps) [2024-11-19T08:46:09.985Z] Copying: 986/1024 [MB] (25 MBps) [2024-11-19T08:46:10.926Z] Copying: 1011/1024 [MB] (24 MBps) [2024-11-19T08:46:10.926Z] Copying: 1023/1024 [MB] (12 MBps) [2024-11-19T08:46:10.926Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-19 08:46:10.820620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.019 [2024-11-19 08:46:10.820707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:49.019 [2024-11-19 08:46:10.820733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:49.019 [2024-11-19 08:46:10.820742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.019 [2024-11-19 08:46:10.822183] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:49.019 [2024-11-19 08:46:10.824722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.019 [2024-11-19 08:46:10.824771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:49.019 [2024-11-19 08:46:10.824782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.497 ms 00:24:49.019 [2024-11-19 08:46:10.824789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.019 [2024-11-19 08:46:10.834309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.019 [2024-11-19 08:46:10.834349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:49.019 [2024-11-19 08:46:10.834361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.739 ms 00:24:49.019 [2024-11-19 08:46:10.834368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.019 [2024-11-19 08:46:10.857365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.019 [2024-11-19 08:46:10.857409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:49.019 [2024-11-19 08:46:10.857429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.025 ms 00:24:49.019 [2024-11-19 08:46:10.857439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.019 [2024-11-19 08:46:10.862247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.019 [2024-11-19 08:46:10.862278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:49.019 [2024-11-19 08:46:10.862287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.784 ms 00:24:49.019 [2024-11-19 08:46:10.862294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.019 [2024-11-19 08:46:10.863812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.019 [2024-11-19 08:46:10.863842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:49.019 [2024-11-19 08:46:10.863851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:24:49.019 [2024-11-19 08:46:10.863859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.019 [2024-11-19 08:46:10.868128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.019 [2024-11-19 08:46:10.868167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:49.019 [2024-11-19 08:46:10.868193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.253 ms 00:24:49.019 [2024-11-19 08:46:10.868200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.281 [2024-11-19 08:46:10.982307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.281 [2024-11-19 08:46:10.982343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:49.281 [2024-11-19 08:46:10.982355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.300 ms 00:24:49.281 [2024-11-19 08:46:10.982364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.281 [2024-11-19 08:46:10.984410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.281 [2024-11-19 08:46:10.984440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:49.281 [2024-11-19 08:46:10.984449] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.036 ms 00:24:49.281 [2024-11-19 08:46:10.984456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.281 [2024-11-19 08:46:10.985970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.281 [2024-11-19 08:46:10.986002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:49.281 [2024-11-19 08:46:10.986025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:24:49.281 [2024-11-19 08:46:10.986033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.281 [2024-11-19 08:46:10.987224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.281 [2024-11-19 08:46:10.987257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:49.281 [2024-11-19 08:46:10.987265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:24:49.281 [2024-11-19 08:46:10.987271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.281 [2024-11-19 08:46:10.988397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.281 [2024-11-19 08:46:10.988430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:49.281 [2024-11-19 08:46:10.988438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:24:49.281 [2024-11-19 08:46:10.988445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.281 [2024-11-19 08:46:10.988467] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:49.281 [2024-11-19 08:46:10.988480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103424 / 261120 wr_cnt: 1 state: open 00:24:49.281 [2024-11-19 08:46:10.988496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:49.281 [2024-11-19 08:46:10.988504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:49.281 [2024-11-19 08:46:10.988511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:49.281 [2024-11-19 08:46:10.988519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:49.281 [2024-11-19 08:46:10.988526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:49.281 [2024-11-19 08:46:10.988534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:49.281 [2024-11-19 08:46:10.988541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:49.281 [2024-11-19 08:46:10.988548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:49.281 [2024-11-19 08:46:10.988556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:49.281 [2024-11-19 08:46:10.988563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988586] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988792] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 
08:46:10.988991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.988998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:24:49.282 [2024-11-19 08:46:10.989165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:49.282 [2024-11-19 08:46:10.989242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:49.283 [2024-11-19 08:46:10.989258] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:49.283 [2024-11-19 08:46:10.989265] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22820913-26bd-4288-afb2-5561b9b6e2aa 00:24:49.283 [2024-11-19 08:46:10.989284] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103424 00:24:49.283 [2024-11-19 08:46:10.989291] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104384 00:24:49.283 [2024-11-19 08:46:10.989297] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103424 00:24:49.283 [2024-11-19 08:46:10.989312] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:24:49.283 [2024-11-19 08:46:10.989320] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:49.283 [2024-11-19 08:46:10.989327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:49.283 [2024-11-19 08:46:10.989333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:49.283 [2024-11-19 08:46:10.989340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:49.283 [2024-11-19 08:46:10.989357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:49.283 [2024-11-19 08:46:10.989364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.283 [2024-11-19 08:46:10.989377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:49.283 [2024-11-19 08:46:10.989386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:24:49.283 [2024-11-19 08:46:10.989396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:10.991051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
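(Editor's note, not part of the captured log.) The ftl_debug statistics dump above reports total writes, user writes, and a write amplification factor (WAF). As a quick sanity check, and assuming WAF is simply the ratio of total device writes to user writes, the short plain-Python sketch below reproduces the reported 1.0093 from the figures in this dump; note also that the 103424 user writes match both the "total valid LBAs" line and Band 1's valid-block count in the band dump above.

    # Sanity-check the WAF figure from the ftl_debug statistics dump above.
    # Assumption: WAF = total writes / user writes (figures copied from the log).
    total_writes = 104384   # "total writes" reported by ftl_dev_dump_stats
    user_writes = 103424    # "user writes"  reported by ftl_dev_dump_stats

    waf = total_writes / user_writes
    print(f"WAF = {waf:.4f}")                                  # -> WAF = 1.0093
    print(f"metadata/GC overhead blocks = {total_writes - user_writes}")  # -> 960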
00:24:49.283 [2024-11-19 08:46:10.991070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:49.283 [2024-11-19 08:46:10.991078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:24:49.283 [2024-11-19 08:46:10.991085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:10.991186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.283 [2024-11-19 08:46:10.991206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:49.283 [2024-11-19 08:46:10.991214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:24:49.283 [2024-11-19 08:46:10.991221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:10.996957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.283 [2024-11-19 08:46:10.996982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.283 [2024-11-19 08:46:10.996991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.283 [2024-11-19 08:46:10.996999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:10.997051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.283 [2024-11-19 08:46:10.997062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.283 [2024-11-19 08:46:10.997069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.283 [2024-11-19 08:46:10.997084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:10.997141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.283 [2024-11-19 08:46:10.997152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.283 [2024-11-19 08:46:10.997159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.283 [2024-11-19 08:46:10.997166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:10.997179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.283 [2024-11-19 08:46:10.997187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.283 [2024-11-19 08:46:10.997198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.283 [2024-11-19 08:46:10.997204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:11.010166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.283 [2024-11-19 08:46:11.010206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.283 [2024-11-19 08:46:11.010215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.283 [2024-11-19 08:46:11.010223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:11.018074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.283 [2024-11-19 08:46:11.018112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.283 [2024-11-19 08:46:11.018127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.283 [2024-11-19 08:46:11.018134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 
08:46:11.018180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.283 [2024-11-19 08:46:11.018188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.283 [2024-11-19 08:46:11.018195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.283 [2024-11-19 08:46:11.018202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:11.018223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.283 [2024-11-19 08:46:11.018231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.283 [2024-11-19 08:46:11.018238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.283 [2024-11-19 08:46:11.018247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:11.018322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.283 [2024-11-19 08:46:11.018334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.283 [2024-11-19 08:46:11.018347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.283 [2024-11-19 08:46:11.018355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:11.018390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.283 [2024-11-19 08:46:11.018399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:49.283 [2024-11-19 08:46:11.018406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.283 [2024-11-19 08:46:11.018413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:11.018457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.283 [2024-11-19 08:46:11.018479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.283 [2024-11-19 08:46:11.018486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.283 [2024-11-19 08:46:11.018494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:11.018535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.283 [2024-11-19 08:46:11.018554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.283 [2024-11-19 08:46:11.018561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.283 [2024-11-19 08:46:11.018570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.283 [2024-11-19 08:46:11.018686] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 199.649 ms, result 0 00:24:50.223 00:24:50.223 00:24:50.223 08:46:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:52.131 08:46:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:52.131 [2024-11-19 08:46:13.639840] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
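(Editor's note, not part of the captured log.) The spdk_dd invocation above reads --count=262144 blocks back out of ftl0 into testfile. As a rough check, and assuming the ftl0 bdev exposes SPDK FTL's usual 4096-byte blocks, that count corresponds to the same 1024 MB total reported by the "Copying: .../1024 [MB]" progress lines earlier in this run:

    # Relate the --count argument of the spdk_dd command above to the
    # 1024 MB transfer size reported by the copy progress lines.
    # Assumption: the ftl0 bdev uses 4096-byte blocks.
    block_size = 4096          # bytes per block (assumed)
    count = 262144             # --count from the spdk_dd command line
    total_bytes = block_size * count
    print(total_bytes)                     # 1073741824
    print(total_bytes / (1024 * 1024))     # 1024.0 MiB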
00:24:52.131 [2024-11-19 08:46:13.640064] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90303 ] 00:24:52.131 [2024-11-19 08:46:13.796575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.131 [2024-11-19 08:46:13.820320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.131 [2024-11-19 08:46:13.921796] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:52.131 [2024-11-19 08:46:13.921866] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:52.393 [2024-11-19 08:46:14.076551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.393 [2024-11-19 08:46:14.076677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:52.393 [2024-11-19 08:46:14.076695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:52.393 [2024-11-19 08:46:14.076703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.393 [2024-11-19 08:46:14.076789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.393 [2024-11-19 08:46:14.076802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:52.393 [2024-11-19 08:46:14.076818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:52.393 [2024-11-19 08:46:14.076832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.393 [2024-11-19 08:46:14.076858] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:52.393 [2024-11-19 08:46:14.077064] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:52.393 [2024-11-19 08:46:14.077079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.393 [2024-11-19 08:46:14.077086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:52.393 [2024-11-19 08:46:14.077094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:24:52.393 [2024-11-19 08:46:14.077113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.393 [2024-11-19 08:46:14.078478] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:52.393 [2024-11-19 08:46:14.080850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.393 [2024-11-19 08:46:14.080892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:52.393 [2024-11-19 08:46:14.080903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.378 ms 00:24:52.393 [2024-11-19 08:46:14.080910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.393 [2024-11-19 08:46:14.080974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.393 [2024-11-19 08:46:14.080984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:52.393 [2024-11-19 08:46:14.080993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:52.393 [2024-11-19 08:46:14.081000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.393 [2024-11-19 08:46:14.087599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:52.393 [2024-11-19 08:46:14.087701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:52.393 [2024-11-19 08:46:14.087713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.563 ms 00:24:52.393 [2024-11-19 08:46:14.087737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.393 [2024-11-19 08:46:14.087824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.393 [2024-11-19 08:46:14.087835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:52.393 [2024-11-19 08:46:14.087843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:52.393 [2024-11-19 08:46:14.087856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.393 [2024-11-19 08:46:14.087902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.393 [2024-11-19 08:46:14.087912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:52.393 [2024-11-19 08:46:14.087928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:52.393 [2024-11-19 08:46:14.087935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.393 [2024-11-19 08:46:14.087961] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:52.393 [2024-11-19 08:46:14.089536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.393 [2024-11-19 08:46:14.089568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:52.393 [2024-11-19 08:46:14.089576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:24:52.393 [2024-11-19 08:46:14.089583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.393 [2024-11-19 08:46:14.089609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.393 [2024-11-19 08:46:14.089617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:52.393 [2024-11-19 08:46:14.089626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:52.393 [2024-11-19 08:46:14.089633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.393 [2024-11-19 08:46:14.089668] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:52.393 [2024-11-19 08:46:14.089686] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:52.393 [2024-11-19 08:46:14.089738] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:52.393 [2024-11-19 08:46:14.089771] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:52.393 [2024-11-19 08:46:14.089854] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:52.393 [2024-11-19 08:46:14.089865] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:52.393 [2024-11-19 08:46:14.089874] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:52.393 [2024-11-19 08:46:14.089888] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:52.393 [2024-11-19 08:46:14.089897] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:52.393 [2024-11-19 08:46:14.089907] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:52.393 [2024-11-19 08:46:14.089914] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:52.393 [2024-11-19 08:46:14.089933] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:52.393 [2024-11-19 08:46:14.089947] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:52.393 [2024-11-19 08:46:14.089956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.393 [2024-11-19 08:46:14.089963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:52.393 [2024-11-19 08:46:14.089971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:24:52.393 [2024-11-19 08:46:14.089978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.393 [2024-11-19 08:46:14.090043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.394 [2024-11-19 08:46:14.090060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:52.394 [2024-11-19 08:46:14.090067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:52.394 [2024-11-19 08:46:14.090074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.394 [2024-11-19 08:46:14.090170] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:52.394 [2024-11-19 08:46:14.090188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:52.394 [2024-11-19 08:46:14.090197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:52.394 [2024-11-19 08:46:14.090210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.394 [2024-11-19 08:46:14.090218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:52.394 [2024-11-19 08:46:14.090226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:52.394 [2024-11-19 08:46:14.090233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:52.394 [2024-11-19 08:46:14.090241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:52.394 [2024-11-19 08:46:14.090248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:52.394 [2024-11-19 08:46:14.090255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:52.394 [2024-11-19 08:46:14.090263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:52.394 [2024-11-19 08:46:14.090269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:52.394 [2024-11-19 08:46:14.090275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:52.394 [2024-11-19 08:46:14.090282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:52.394 [2024-11-19 08:46:14.090289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:52.394 [2024-11-19 08:46:14.090295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.394 [2024-11-19 08:46:14.090302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:52.394 [2024-11-19 08:46:14.090308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:52.394 [2024-11-19 08:46:14.090313] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.394 [2024-11-19 08:46:14.090322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:52.394 [2024-11-19 08:46:14.090329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:52.394 [2024-11-19 08:46:14.090336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.394 [2024-11-19 08:46:14.090343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:52.394 [2024-11-19 08:46:14.090349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:52.394 [2024-11-19 08:46:14.090355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.394 [2024-11-19 08:46:14.090361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:52.394 [2024-11-19 08:46:14.090367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:52.394 [2024-11-19 08:46:14.090373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.394 [2024-11-19 08:46:14.090379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:52.394 [2024-11-19 08:46:14.090385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:52.394 [2024-11-19 08:46:14.090392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.394 [2024-11-19 08:46:14.090397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:52.394 [2024-11-19 08:46:14.090404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:52.394 [2024-11-19 08:46:14.090410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:52.394 [2024-11-19 08:46:14.090415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:52.394 [2024-11-19 08:46:14.090426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:52.394 [2024-11-19 08:46:14.090433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:52.394 [2024-11-19 08:46:14.090439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:52.394 [2024-11-19 08:46:14.090446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:52.394 [2024-11-19 08:46:14.090453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.394 [2024-11-19 08:46:14.090459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:52.394 [2024-11-19 08:46:14.090465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:52.394 [2024-11-19 08:46:14.090471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.394 [2024-11-19 08:46:14.090477] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:52.394 [2024-11-19 08:46:14.090484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:52.394 [2024-11-19 08:46:14.090502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:52.394 [2024-11-19 08:46:14.090509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.394 [2024-11-19 08:46:14.090517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:52.394 [2024-11-19 08:46:14.090524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:52.394 [2024-11-19 08:46:14.090531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:52.394 
[2024-11-19 08:46:14.090537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:52.394 [2024-11-19 08:46:14.090545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:52.394 [2024-11-19 08:46:14.090552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:52.394 [2024-11-19 08:46:14.090559] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:52.394 [2024-11-19 08:46:14.090567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:52.394 [2024-11-19 08:46:14.090575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:52.394 [2024-11-19 08:46:14.090582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:52.394 [2024-11-19 08:46:14.090589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:52.394 [2024-11-19 08:46:14.090596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:52.394 [2024-11-19 08:46:14.090602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:52.394 [2024-11-19 08:46:14.090608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:52.394 [2024-11-19 08:46:14.090616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:52.394 [2024-11-19 08:46:14.090623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:52.394 [2024-11-19 08:46:14.090629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:52.394 [2024-11-19 08:46:14.090636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:52.394 [2024-11-19 08:46:14.090642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:52.394 [2024-11-19 08:46:14.090657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:52.394 [2024-11-19 08:46:14.090667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:52.394 [2024-11-19 08:46:14.090675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:52.394 [2024-11-19 08:46:14.090682] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:52.394 [2024-11-19 08:46:14.090689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:52.394 [2024-11-19 08:46:14.090705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:52.394 [2024-11-19 08:46:14.090712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:52.394 [2024-11-19 08:46:14.090834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:52.394 [2024-11-19 08:46:14.090864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:52.394 [2024-11-19 08:46:14.090893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.394 [2024-11-19 08:46:14.090914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:52.394 [2024-11-19 08:46:14.090933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:24:52.394 [2024-11-19 08:46:14.090951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.394 [2024-11-19 08:46:14.102681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.394 [2024-11-19 08:46:14.102797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:52.394 [2024-11-19 08:46:14.102827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.676 ms 00:24:52.394 [2024-11-19 08:46:14.102849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.102934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.102963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:52.395 [2024-11-19 08:46:14.102986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:52.395 [2024-11-19 08:46:14.103028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.130660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.130936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:52.395 [2024-11-19 08:46:14.131104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.593 ms 00:24:52.395 [2024-11-19 08:46:14.131206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.131326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.131360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:52.395 [2024-11-19 08:46:14.131389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:52.395 [2024-11-19 08:46:14.131415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.132253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.132326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:52.395 [2024-11-19 08:46:14.132357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:24:52.395 [2024-11-19 08:46:14.132383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.132771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.132826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:52.395 [2024-11-19 08:46:14.132853] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:24:52.395 [2024-11-19 08:46:14.132942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.143776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.143841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:52.395 [2024-11-19 08:46:14.143864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.787 ms 00:24:52.395 [2024-11-19 08:46:14.143881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.147768] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:52.395 [2024-11-19 08:46:14.147831] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:52.395 [2024-11-19 08:46:14.147856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.147875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:52.395 [2024-11-19 08:46:14.147893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.774 ms 00:24:52.395 [2024-11-19 08:46:14.147910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.165305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.165348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:52.395 [2024-11-19 08:46:14.165361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.359 ms 00:24:52.395 [2024-11-19 08:46:14.165369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.167287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.167321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:52.395 [2024-11-19 08:46:14.167337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.878 ms 00:24:52.395 [2024-11-19 08:46:14.167345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.168950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.169041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:52.395 [2024-11-19 08:46:14.169053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:24:52.395 [2024-11-19 08:46:14.169061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.169375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.169392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:52.395 [2024-11-19 08:46:14.169401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:24:52.395 [2024-11-19 08:46:14.169408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.189440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.189508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:52.395 [2024-11-19 08:46:14.189524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.046 ms 00:24:52.395 [2024-11-19 08:46:14.189541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.195405] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:52.395 [2024-11-19 08:46:14.197981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.198032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:52.395 [2024-11-19 08:46:14.198042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.407 ms 00:24:52.395 [2024-11-19 08:46:14.198050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.198107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.198116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:52.395 [2024-11-19 08:46:14.198132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:52.395 [2024-11-19 08:46:14.198145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.199623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.199670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:52.395 [2024-11-19 08:46:14.199687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.431 ms 00:24:52.395 [2024-11-19 08:46:14.199703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.199767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.199780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:52.395 [2024-11-19 08:46:14.199787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:52.395 [2024-11-19 08:46:14.199794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.199830] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:52.395 [2024-11-19 08:46:14.199847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.199860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:52.395 [2024-11-19 08:46:14.199868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:52.395 [2024-11-19 08:46:14.199878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.203701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.203744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:52.395 [2024-11-19 08:46:14.203755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.813 ms 00:24:52.395 [2024-11-19 08:46:14.203762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 [2024-11-19 08:46:14.203830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.395 [2024-11-19 08:46:14.203840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:52.395 [2024-11-19 08:46:14.203856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:52.395 [2024-11-19 08:46:14.203864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.395 
[2024-11-19 08:46:14.204920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.139 ms, result 0 00:24:53.777  [2024-11-19T08:46:16.652Z] Copying: 1324/1048576 [kB] (1324 kBps) [2024-11-19T08:46:17.594Z] Copying: 11092/1048576 [kB] (9768 kBps) [2024-11-19T08:46:18.537Z] Copying: 44/1024 [MB] (33 MBps) [2024-11-19T08:46:19.475Z] Copying: 77/1024 [MB] (33 MBps) [2024-11-19T08:46:20.415Z] Copying: 113/1024 [MB] (35 MBps) [2024-11-19T08:46:21.357Z] Copying: 149/1024 [MB] (35 MBps) [2024-11-19T08:46:22.739Z] Copying: 184/1024 [MB] (35 MBps) [2024-11-19T08:46:23.679Z] Copying: 218/1024 [MB] (34 MBps) [2024-11-19T08:46:24.620Z] Copying: 253/1024 [MB] (35 MBps) [2024-11-19T08:46:25.560Z] Copying: 289/1024 [MB] (35 MBps) [2024-11-19T08:46:26.501Z] Copying: 324/1024 [MB] (35 MBps) [2024-11-19T08:46:27.441Z] Copying: 359/1024 [MB] (34 MBps) [2024-11-19T08:46:28.382Z] Copying: 394/1024 [MB] (34 MBps) [2024-11-19T08:46:29.347Z] Copying: 430/1024 [MB] (35 MBps) [2024-11-19T08:46:30.731Z] Copying: 465/1024 [MB] (34 MBps) [2024-11-19T08:46:31.668Z] Copying: 499/1024 [MB] (34 MBps) [2024-11-19T08:46:32.612Z] Copying: 533/1024 [MB] (34 MBps) [2024-11-19T08:46:33.553Z] Copying: 568/1024 [MB] (34 MBps) [2024-11-19T08:46:34.494Z] Copying: 601/1024 [MB] (33 MBps) [2024-11-19T08:46:35.435Z] Copying: 635/1024 [MB] (33 MBps) [2024-11-19T08:46:36.385Z] Copying: 670/1024 [MB] (35 MBps) [2024-11-19T08:46:37.348Z] Copying: 706/1024 [MB] (36 MBps) [2024-11-19T08:46:38.729Z] Copying: 742/1024 [MB] (35 MBps) [2024-11-19T08:46:39.669Z] Copying: 777/1024 [MB] (35 MBps) [2024-11-19T08:46:40.608Z] Copying: 807/1024 [MB] (30 MBps) [2024-11-19T08:46:41.548Z] Copying: 839/1024 [MB] (31 MBps) [2024-11-19T08:46:42.489Z] Copying: 871/1024 [MB] (31 MBps) [2024-11-19T08:46:43.430Z] Copying: 903/1024 [MB] (31 MBps) [2024-11-19T08:46:44.371Z] Copying: 935/1024 [MB] (31 MBps) [2024-11-19T08:46:45.311Z] Copying: 966/1024 [MB] (31 MBps) [2024-11-19T08:46:46.251Z] Copying: 998/1024 [MB] (31 MBps) [2024-11-19T08:46:46.513Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-19 08:46:46.329835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.606 [2024-11-19 08:46:46.329934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:24.606 [2024-11-19 08:46:46.329955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:24.606 [2024-11-19 08:46:46.329968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.606 [2024-11-19 08:46:46.329999] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:24.606 [2024-11-19 08:46:46.331175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.606 [2024-11-19 08:46:46.331224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:24.606 [2024-11-19 08:46:46.331246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:25:24.606 [2024-11-19 08:46:46.331259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.606 [2024-11-19 08:46:46.331584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.606 [2024-11-19 08:46:46.331787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:24.606 [2024-11-19 08:46:46.331816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:25:24.606 [2024-11-19 08:46:46.331833] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.607 [2024-11-19 08:46:46.346498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.607 [2024-11-19 08:46:46.346574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:24.607 [2024-11-19 08:46:46.346594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.659 ms 00:25:24.607 [2024-11-19 08:46:46.346620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.607 [2024-11-19 08:46:46.352429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.607 [2024-11-19 08:46:46.352488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:24.607 [2024-11-19 08:46:46.352502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.766 ms 00:25:24.607 [2024-11-19 08:46:46.352511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.607 [2024-11-19 08:46:46.354406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.607 [2024-11-19 08:46:46.354534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:24.607 [2024-11-19 08:46:46.354552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.821 ms 00:25:24.607 [2024-11-19 08:46:46.354562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.607 [2024-11-19 08:46:46.359678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.607 [2024-11-19 08:46:46.359756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:24.607 [2024-11-19 08:46:46.359771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.084 ms 00:25:24.607 [2024-11-19 08:46:46.359790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.607 [2024-11-19 08:46:46.361906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.607 [2024-11-19 08:46:46.361985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:24.607 [2024-11-19 08:46:46.362015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.077 ms 00:25:24.607 [2024-11-19 08:46:46.362025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.607 [2024-11-19 08:46:46.364731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.607 [2024-11-19 08:46:46.364797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:24.607 [2024-11-19 08:46:46.364811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.685 ms 00:25:24.607 [2024-11-19 08:46:46.364839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.607 [2024-11-19 08:46:46.366778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.607 [2024-11-19 08:46:46.366826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:24.607 [2024-11-19 08:46:46.366837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.904 ms 00:25:24.607 [2024-11-19 08:46:46.366846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.607 [2024-11-19 08:46:46.368263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.607 [2024-11-19 08:46:46.368387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:24.607 [2024-11-19 08:46:46.368404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 1.385 ms 00:25:24.607 [2024-11-19 08:46:46.368413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.607 [2024-11-19 08:46:46.369993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.607 [2024-11-19 08:46:46.370040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:24.607 [2024-11-19 08:46:46.370054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.512 ms 00:25:24.607 [2024-11-19 08:46:46.370063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.607 [2024-11-19 08:46:46.370096] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:24.607 [2024-11-19 08:46:46.370115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:24.607 [2024-11-19 08:46:46.370127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:24.607 [2024-11-19 08:46:46.370137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 
00:25:24.607 [2024-11-19 08:46:46.370295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 
wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:24.607 [2024-11-19 08:46:46.370615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.370994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.371002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.371011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.371019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.371027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.371036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.371044] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.371053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.371062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.371076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.371085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.371093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:24.608 [2024-11-19 08:46:46.371109] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:24.608 [2024-11-19 08:46:46.371118] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22820913-26bd-4288-afb2-5561b9b6e2aa 00:25:24.608 [2024-11-19 08:46:46.371132] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:24.608 [2024-11-19 08:46:46.371146] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 161216 00:25:24.608 [2024-11-19 08:46:46.371165] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 159232 00:25:24.608 [2024-11-19 08:46:46.371184] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:25:24.608 [2024-11-19 08:46:46.371193] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:24.608 [2024-11-19 08:46:46.371205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:24.608 [2024-11-19 08:46:46.371214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:24.608 [2024-11-19 08:46:46.371221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:24.608 [2024-11-19 08:46:46.371229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:24.608 [2024-11-19 08:46:46.371239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.608 [2024-11-19 08:46:46.371249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:24.608 [2024-11-19 08:46:46.371258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:25:24.608 [2024-11-19 08:46:46.371266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.608 [2024-11-19 08:46:46.373608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.608 [2024-11-19 08:46:46.373637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:24.608 [2024-11-19 08:46:46.373648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:25:24.608 [2024-11-19 08:46:46.373666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.608 [2024-11-19 08:46:46.373848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.608 [2024-11-19 08:46:46.373862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:24.608 [2024-11-19 08:46:46.373873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:25:24.608 [2024-11-19 08:46:46.373885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.608 [2024-11-19 08:46:46.381434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.608 [2024-11-19 
08:46:46.381469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:24.608 [2024-11-19 08:46:46.381482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.608 [2024-11-19 08:46:46.381493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.608 [2024-11-19 08:46:46.381543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.608 [2024-11-19 08:46:46.381553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:24.608 [2024-11-19 08:46:46.381563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.608 [2024-11-19 08:46:46.381578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.608 [2024-11-19 08:46:46.381628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.608 [2024-11-19 08:46:46.381640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:24.608 [2024-11-19 08:46:46.381649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.608 [2024-11-19 08:46:46.381659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.608 [2024-11-19 08:46:46.381678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.608 [2024-11-19 08:46:46.381687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:24.608 [2024-11-19 08:46:46.381697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.608 [2024-11-19 08:46:46.381706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.608 [2024-11-19 08:46:46.398806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.608 [2024-11-19 08:46:46.398956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:24.608 [2024-11-19 08:46:46.398974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.608 [2024-11-19 08:46:46.398984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.608 [2024-11-19 08:46:46.408558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.608 [2024-11-19 08:46:46.408598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:24.608 [2024-11-19 08:46:46.408612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.608 [2024-11-19 08:46:46.408628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.608 [2024-11-19 08:46:46.408682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.608 [2024-11-19 08:46:46.408692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:24.608 [2024-11-19 08:46:46.408702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.608 [2024-11-19 08:46:46.408711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.608 [2024-11-19 08:46:46.408757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.609 [2024-11-19 08:46:46.408784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:24.609 [2024-11-19 08:46:46.408794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.609 [2024-11-19 08:46:46.408803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.609 [2024-11-19 08:46:46.408908] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.609 [2024-11-19 08:46:46.408923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:24.609 [2024-11-19 08:46:46.408933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.609 [2024-11-19 08:46:46.408942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.609 [2024-11-19 08:46:46.408985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.609 [2024-11-19 08:46:46.408996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:24.609 [2024-11-19 08:46:46.409007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.609 [2024-11-19 08:46:46.409027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.609 [2024-11-19 08:46:46.409080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.609 [2024-11-19 08:46:46.409095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:24.609 [2024-11-19 08:46:46.409105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.609 [2024-11-19 08:46:46.409114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.609 [2024-11-19 08:46:46.409172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.609 [2024-11-19 08:46:46.409185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:24.609 [2024-11-19 08:46:46.409194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.609 [2024-11-19 08:46:46.409204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.609 [2024-11-19 08:46:46.409339] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 79.634 ms, result 0 00:25:24.869 00:25:24.869 00:25:24.869 08:46:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:26.780 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:26.780 08:46:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:26.780 [2024-11-19 08:46:48.354203] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:25:26.780 [2024-11-19 08:46:48.354323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90661 ] 00:25:26.780 [2024-11-19 08:46:48.510888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.780 [2024-11-19 08:46:48.538920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.780 [2024-11-19 08:46:48.643355] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:26.780 [2024-11-19 08:46:48.643438] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:27.041 [2024-11-19 08:46:48.798438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.041 [2024-11-19 08:46:48.798492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:27.041 [2024-11-19 08:46:48.798510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:27.041 [2024-11-19 08:46:48.798519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.041 [2024-11-19 08:46:48.798578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.041 [2024-11-19 08:46:48.798590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:27.041 [2024-11-19 08:46:48.798600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:27.041 [2024-11-19 08:46:48.798608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.041 [2024-11-19 08:46:48.798645] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:27.041 [2024-11-19 08:46:48.798904] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:27.041 [2024-11-19 08:46:48.798926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.041 [2024-11-19 08:46:48.798936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:27.041 [2024-11-19 08:46:48.798945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:25:27.041 [2024-11-19 08:46:48.798958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.041 [2024-11-19 08:46:48.800426] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:27.041 [2024-11-19 08:46:48.803168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.041 [2024-11-19 08:46:48.803209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:27.041 [2024-11-19 08:46:48.803222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.749 ms 00:25:27.041 [2024-11-19 08:46:48.803231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.041 [2024-11-19 08:46:48.803301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.041 [2024-11-19 08:46:48.803313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:27.041 [2024-11-19 08:46:48.803324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:27.041 [2024-11-19 08:46:48.803332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.041 [2024-11-19 08:46:48.810295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:27.041 [2024-11-19 08:46:48.810379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:27.041 [2024-11-19 08:46:48.810412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.929 ms 00:25:27.041 [2024-11-19 08:46:48.810443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.041 [2024-11-19 08:46:48.810562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.041 [2024-11-19 08:46:48.810608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:27.041 [2024-11-19 08:46:48.810662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:27.041 [2024-11-19 08:46:48.810698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.041 [2024-11-19 08:46:48.810793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.042 [2024-11-19 08:46:48.810846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:27.042 [2024-11-19 08:46:48.810882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:27.042 [2024-11-19 08:46:48.810922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.042 [2024-11-19 08:46:48.810986] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:27.042 [2024-11-19 08:46:48.812671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.042 [2024-11-19 08:46:48.812768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:27.042 [2024-11-19 08:46:48.812808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:25:27.042 [2024-11-19 08:46:48.812821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.042 [2024-11-19 08:46:48.812860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.042 [2024-11-19 08:46:48.812897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:27.042 [2024-11-19 08:46:48.812907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:27.042 [2024-11-19 08:46:48.812921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.042 [2024-11-19 08:46:48.812949] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:27.042 [2024-11-19 08:46:48.812974] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:27.042 [2024-11-19 08:46:48.813027] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:27.042 [2024-11-19 08:46:48.813046] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:27.042 [2024-11-19 08:46:48.813133] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:27.042 [2024-11-19 08:46:48.813146] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:27.042 [2024-11-19 08:46:48.813158] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:27.042 [2024-11-19 08:46:48.813175] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:27.042 [2024-11-19 08:46:48.813187] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:27.042 [2024-11-19 08:46:48.813196] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:27.042 [2024-11-19 08:46:48.813206] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:27.042 [2024-11-19 08:46:48.813215] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:27.042 [2024-11-19 08:46:48.813224] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:27.042 [2024-11-19 08:46:48.813234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.042 [2024-11-19 08:46:48.813253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:27.042 [2024-11-19 08:46:48.813268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:25:27.042 [2024-11-19 08:46:48.813277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.042 [2024-11-19 08:46:48.813347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.042 [2024-11-19 08:46:48.813361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:27.042 [2024-11-19 08:46:48.813371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:27.042 [2024-11-19 08:46:48.813380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.042 [2024-11-19 08:46:48.813485] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:27.042 [2024-11-19 08:46:48.813507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:27.042 [2024-11-19 08:46:48.813517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:27.042 [2024-11-19 08:46:48.813528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:27.042 [2024-11-19 08:46:48.813538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:27.042 [2024-11-19 08:46:48.813546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:27.042 [2024-11-19 08:46:48.813555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:27.042 [2024-11-19 08:46:48.813564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:27.042 [2024-11-19 08:46:48.813573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:27.042 [2024-11-19 08:46:48.813641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:27.042 [2024-11-19 08:46:48.813652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:27.042 [2024-11-19 08:46:48.813661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:27.042 [2024-11-19 08:46:48.813670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:27.042 [2024-11-19 08:46:48.813680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:27.042 [2024-11-19 08:46:48.813689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:27.042 [2024-11-19 08:46:48.813697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:27.042 [2024-11-19 08:46:48.813705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:27.042 [2024-11-19 08:46:48.813714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:27.042 [2024-11-19 08:46:48.813761] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:27.042 [2024-11-19 08:46:48.813770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:27.042 [2024-11-19 08:46:48.813778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:27.042 [2024-11-19 08:46:48.813786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:27.042 [2024-11-19 08:46:48.813795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:27.042 [2024-11-19 08:46:48.813806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:27.042 [2024-11-19 08:46:48.813813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:27.042 [2024-11-19 08:46:48.813830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:27.042 [2024-11-19 08:46:48.813839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:27.042 [2024-11-19 08:46:48.813859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:27.042 [2024-11-19 08:46:48.813868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:27.042 [2024-11-19 08:46:48.813877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:27.042 [2024-11-19 08:46:48.813886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:27.042 [2024-11-19 08:46:48.813894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:27.042 [2024-11-19 08:46:48.813902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:27.042 [2024-11-19 08:46:48.813910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:27.042 [2024-11-19 08:46:48.813917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:27.042 [2024-11-19 08:46:48.813925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:27.042 [2024-11-19 08:46:48.813933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:27.042 [2024-11-19 08:46:48.813941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:27.042 [2024-11-19 08:46:48.813950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:27.042 [2024-11-19 08:46:48.813958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:27.042 [2024-11-19 08:46:48.813966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:27.042 [2024-11-19 08:46:48.813977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:27.042 [2024-11-19 08:46:48.813985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:27.042 [2024-11-19 08:46:48.813994] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:27.042 [2024-11-19 08:46:48.814003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:27.042 [2024-11-19 08:46:48.814015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:27.042 [2024-11-19 08:46:48.814024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:27.042 [2024-11-19 08:46:48.814033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:27.042 [2024-11-19 08:46:48.814041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:27.042 [2024-11-19 08:46:48.814049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:27.042 
[2024-11-19 08:46:48.814057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:27.042 [2024-11-19 08:46:48.814064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:27.042 [2024-11-19 08:46:48.814074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:27.042 [2024-11-19 08:46:48.814086] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:27.042 [2024-11-19 08:46:48.814097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:27.042 [2024-11-19 08:46:48.814108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:27.042 [2024-11-19 08:46:48.814117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:27.042 [2024-11-19 08:46:48.814130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:27.042 [2024-11-19 08:46:48.814140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:27.042 [2024-11-19 08:46:48.814149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:27.042 [2024-11-19 08:46:48.814158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:27.042 [2024-11-19 08:46:48.814167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:27.042 [2024-11-19 08:46:48.814175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:27.042 [2024-11-19 08:46:48.814183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:27.042 [2024-11-19 08:46:48.814193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:27.042 [2024-11-19 08:46:48.814202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:27.043 [2024-11-19 08:46:48.814223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:27.043 [2024-11-19 08:46:48.814232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:27.043 [2024-11-19 08:46:48.814240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:27.043 [2024-11-19 08:46:48.814250] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:27.043 [2024-11-19 08:46:48.814268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:27.043 [2024-11-19 08:46:48.814288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:27.043 [2024-11-19 08:46:48.814298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:27.043 [2024-11-19 08:46:48.814309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:27.043 [2024-11-19 08:46:48.814320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:27.043 [2024-11-19 08:46:48.814330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.814341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:27.043 [2024-11-19 08:46:48.814352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:25:27.043 [2024-11-19 08:46:48.814361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.826652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.826696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:27.043 [2024-11-19 08:46:48.826709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.259 ms 00:25:27.043 [2024-11-19 08:46:48.826732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.826828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.826863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:27.043 [2024-11-19 08:46:48.826874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:27.043 [2024-11-19 08:46:48.826884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.852477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.852590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:27.043 [2024-11-19 08:46:48.852638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.573 ms 00:25:27.043 [2024-11-19 08:46:48.852673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.852841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.852906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:27.043 [2024-11-19 08:46:48.852958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:27.043 [2024-11-19 08:46:48.852995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.853894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.854154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:27.043 [2024-11-19 08:46:48.854206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:25:27.043 [2024-11-19 08:46:48.854281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.854683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.854776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:27.043 [2024-11-19 08:46:48.854803] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:25:27.043 [2024-11-19 08:46:48.854826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.865134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.865198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:27.043 [2024-11-19 08:46:48.865218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.275 ms 00:25:27.043 [2024-11-19 08:46:48.865234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.868586] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:27.043 [2024-11-19 08:46:48.868757] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:27.043 [2024-11-19 08:46:48.868785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.868801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:27.043 [2024-11-19 08:46:48.868817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.385 ms 00:25:27.043 [2024-11-19 08:46:48.868832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.885031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.885088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:27.043 [2024-11-19 08:46:48.885103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.150 ms 00:25:27.043 [2024-11-19 08:46:48.885114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.887098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.887137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:27.043 [2024-11-19 08:46:48.887149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.936 ms 00:25:27.043 [2024-11-19 08:46:48.887158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.888715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.888769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:27.043 [2024-11-19 08:46:48.888780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 00:25:27.043 [2024-11-19 08:46:48.888798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.889081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.889100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:27.043 [2024-11-19 08:46:48.889112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:25:27.043 [2024-11-19 08:46:48.889135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.910009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.910191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:27.043 [2024-11-19 08:46:48.910210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.879 ms 00:25:27.043 [2024-11-19 08:46:48.910221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.916120] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:27.043 [2024-11-19 08:46:48.918984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.919044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:27.043 [2024-11-19 08:46:48.919060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.722 ms 00:25:27.043 [2024-11-19 08:46:48.919070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.919157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.919169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:27.043 [2024-11-19 08:46:48.919180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:27.043 [2024-11-19 08:46:48.919189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.920003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.920020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:27.043 [2024-11-19 08:46:48.920031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:25:27.043 [2024-11-19 08:46:48.920044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.920081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.920103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:27.043 [2024-11-19 08:46:48.920113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:27.043 [2024-11-19 08:46:48.920122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.920163] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:27.043 [2024-11-19 08:46:48.920177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.920186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:27.043 [2024-11-19 08:46:48.920196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:27.043 [2024-11-19 08:46:48.920212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.924072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.924111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:27.043 [2024-11-19 08:46:48.924123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.846 ms 00:25:27.043 [2024-11-19 08:46:48.924133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 [2024-11-19 08:46:48.924208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.043 [2024-11-19 08:46:48.924219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:27.043 [2024-11-19 08:46:48.924229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:27.043 [2024-11-19 08:46:48.924248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.043 
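The superblock dump above splits the metadata layout between the NV cache device (nvc) and the base device, with each region given as blk_offs/blk_sz in blocks. Assuming the 4 KiB block size these bdevs report elsewhere in this log, those raw values line up with the human-readable data_btm summary (offset 0.25 MiB, 102400.00 MiB): the base-dev entry blk_offs:0x40 blk_sz:0x1900000 converts to exactly those figures. A throwaway shell helper, hypothetical and not part of the test, makes the conversion explicit:

    # illustrative only: convert a hex block offset/count from the dump into MiB,
    # assuming 4 KiB blocks
    blk_to_mib() { echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc; }
    blk_to_mib 0x40        # 0.25      -> matches "offset: 0.25 MiB"
    blk_to_mib 0x1900000   # 102400.00 -> matches "blocks: 102400.00 MiB"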
[2024-11-19 08:46:48.925347] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.703 ms, result 0 00:25:28.426  [2024-11-19T08:46:51.273Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-19T08:46:52.214Z] Copying: 49/1024 [MB] (24 MBps) [2024-11-19T08:46:53.153Z] Copying: 74/1024 [MB] (24 MBps) [2024-11-19T08:46:54.093Z] Copying: 99/1024 [MB] (24 MBps) [2024-11-19T08:46:55.474Z] Copying: 123/1024 [MB] (24 MBps) [2024-11-19T08:46:56.420Z] Copying: 149/1024 [MB] (25 MBps) [2024-11-19T08:46:57.387Z] Copying: 174/1024 [MB] (25 MBps) [2024-11-19T08:46:58.326Z] Copying: 199/1024 [MB] (25 MBps) [2024-11-19T08:46:59.266Z] Copying: 225/1024 [MB] (26 MBps) [2024-11-19T08:47:00.206Z] Copying: 252/1024 [MB] (26 MBps) [2024-11-19T08:47:01.148Z] Copying: 279/1024 [MB] (26 MBps) [2024-11-19T08:47:02.089Z] Copying: 306/1024 [MB] (27 MBps) [2024-11-19T08:47:03.471Z] Copying: 334/1024 [MB] (27 MBps) [2024-11-19T08:47:04.411Z] Copying: 361/1024 [MB] (27 MBps) [2024-11-19T08:47:05.352Z] Copying: 388/1024 [MB] (27 MBps) [2024-11-19T08:47:06.291Z] Copying: 416/1024 [MB] (27 MBps) [2024-11-19T08:47:07.231Z] Copying: 444/1024 [MB] (27 MBps) [2024-11-19T08:47:08.172Z] Copying: 473/1024 [MB] (28 MBps) [2024-11-19T08:47:09.112Z] Copying: 500/1024 [MB] (27 MBps) [2024-11-19T08:47:10.054Z] Copying: 528/1024 [MB] (27 MBps) [2024-11-19T08:47:11.436Z] Copying: 555/1024 [MB] (27 MBps) [2024-11-19T08:47:12.378Z] Copying: 582/1024 [MB] (27 MBps) [2024-11-19T08:47:13.320Z] Copying: 610/1024 [MB] (27 MBps) [2024-11-19T08:47:14.260Z] Copying: 638/1024 [MB] (27 MBps) [2024-11-19T08:47:15.200Z] Copying: 666/1024 [MB] (28 MBps) [2024-11-19T08:47:16.192Z] Copying: 695/1024 [MB] (28 MBps) [2024-11-19T08:47:17.133Z] Copying: 723/1024 [MB] (28 MBps) [2024-11-19T08:47:18.074Z] Copying: 750/1024 [MB] (27 MBps) [2024-11-19T08:47:19.458Z] Copying: 778/1024 [MB] (27 MBps) [2024-11-19T08:47:20.028Z] Copying: 806/1024 [MB] (28 MBps) [2024-11-19T08:47:21.412Z] Copying: 834/1024 [MB] (27 MBps) [2024-11-19T08:47:22.352Z] Copying: 862/1024 [MB] (27 MBps) [2024-11-19T08:47:23.293Z] Copying: 889/1024 [MB] (26 MBps) [2024-11-19T08:47:24.234Z] Copying: 915/1024 [MB] (26 MBps) [2024-11-19T08:47:25.174Z] Copying: 942/1024 [MB] (27 MBps) [2024-11-19T08:47:26.114Z] Copying: 968/1024 [MB] (26 MBps) [2024-11-19T08:47:27.054Z] Copying: 994/1024 [MB] (26 MBps) [2024-11-19T08:47:27.314Z] Copying: 1020/1024 [MB] (25 MBps) [2024-11-19T08:47:27.314Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-19 08:47:27.293451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.408 [2024-11-19 08:47:27.293680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:05.408 [2024-11-19 08:47:27.293803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:05.408 [2024-11-19 08:47:27.293879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.408 [2024-11-19 08:47:27.294035] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:05.408 [2024-11-19 08:47:27.296450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.408 [2024-11-19 08:47:27.296573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:05.408 [2024-11-19 08:47:27.296643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.279 ms 00:26:05.408 [2024-11-19 08:47:27.296694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:05.408 [2024-11-19 08:47:27.297273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.408 [2024-11-19 08:47:27.297391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:05.408 [2024-11-19 08:47:27.297418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:26:05.408 [2024-11-19 08:47:27.297437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.408 [2024-11-19 08:47:27.302236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.408 [2024-11-19 08:47:27.302300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:05.408 [2024-11-19 08:47:27.302340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.762 ms 00:26:05.408 [2024-11-19 08:47:27.302374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.408 [2024-11-19 08:47:27.310459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.408 [2024-11-19 08:47:27.310536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:05.408 [2024-11-19 08:47:27.310571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.053 ms 00:26:05.408 [2024-11-19 08:47:27.310595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.676 [2024-11-19 08:47:27.312574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.676 [2024-11-19 08:47:27.312653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:05.676 [2024-11-19 08:47:27.312692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.897 ms 00:26:05.676 [2024-11-19 08:47:27.312731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.676 [2024-11-19 08:47:27.318609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.676 [2024-11-19 08:47:27.319009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:05.676 [2024-11-19 08:47:27.319153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.802 ms 00:26:05.676 [2024-11-19 08:47:27.319261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.676 [2024-11-19 08:47:27.322231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.676 [2024-11-19 08:47:27.322430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:05.676 [2024-11-19 08:47:27.322538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.774 ms 00:26:05.676 [2024-11-19 08:47:27.322695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.676 [2024-11-19 08:47:27.326335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.676 [2024-11-19 08:47:27.326521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:05.676 [2024-11-19 08:47:27.326631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.448 ms 00:26:05.676 [2024-11-19 08:47:27.326745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.676 [2024-11-19 08:47:27.329536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.676 [2024-11-19 08:47:27.329712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:05.676 [2024-11-19 08:47:27.329776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.614 ms 00:26:05.676 [2024-11-19 08:47:27.329805] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.676 [2024-11-19 08:47:27.332142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.676 [2024-11-19 08:47:27.332222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:05.676 [2024-11-19 08:47:27.332253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.256 ms 00:26:05.676 [2024-11-19 08:47:27.332286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.676 [2024-11-19 08:47:27.334100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.676 [2024-11-19 08:47:27.334218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:05.676 [2024-11-19 08:47:27.334283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:26:05.676 [2024-11-19 08:47:27.334331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.676 [2024-11-19 08:47:27.334428] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:05.676 [2024-11-19 08:47:27.334524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:05.676 [2024-11-19 08:47:27.334640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:05.676 [2024-11-19 08:47:27.334769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.334867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.334972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.334997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335221] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 
[2024-11-19 08:47:27.335687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.335997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.336015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.336032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.336048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.336066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.336083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.336103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.336121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:05.676 [2024-11-19 08:47:27.336139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:26:05.677 [2024-11-19 08:47:27.336174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.336930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.337006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:05.677 [2024-11-19 08:47:27.337181] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:05.677 [2024-11-19 08:47:27.337267] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22820913-26bd-4288-afb2-5561b9b6e2aa 00:26:05.677 [2024-11-19 08:47:27.337367] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:05.677 [2024-11-19 08:47:27.337459] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:05.677 [2024-11-19 08:47:27.337526] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:05.677 [2024-11-19 08:47:27.337592] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:05.677 [2024-11-19 08:47:27.337657] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:05.677 [2024-11-19 08:47:27.337741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:05.677 [2024-11-19 08:47:27.337849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:05.677 [2024-11-19 08:47:27.337951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:05.677 [2024-11-19 08:47:27.338016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:05.677 [2024-11-19 08:47:27.338085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.677 [2024-11-19 08:47:27.338149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:05.677 [2024-11-19 08:47:27.338252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.665 ms 00:26:05.677 [2024-11-19 08:47:27.338315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.341013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.677 [2024-11-19 08:47:27.341139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:05.677 [2024-11-19 08:47:27.341213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.612 ms 00:26:05.677 [2024-11-19 08:47:27.341304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.341542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.677 [2024-11-19 08:47:27.341623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:26:05.677 [2024-11-19 08:47:27.341687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:26:05.677 [2024-11-19 08:47:27.341770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.350160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.677 [2024-11-19 08:47:27.350256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:05.677 [2024-11-19 08:47:27.350310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.677 [2024-11-19 08:47:27.350344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.350462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.677 [2024-11-19 08:47:27.350546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:05.677 [2024-11-19 08:47:27.350592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.677 [2024-11-19 08:47:27.350640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.350778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.677 [2024-11-19 08:47:27.350843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:05.677 [2024-11-19 08:47:27.350894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.677 [2024-11-19 08:47:27.350959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.350993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.677 [2024-11-19 08:47:27.351014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:05.677 [2024-11-19 08:47:27.351027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.677 [2024-11-19 08:47:27.351039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.367891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.677 [2024-11-19 08:47:27.368008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:05.677 [2024-11-19 08:47:27.368023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.677 [2024-11-19 08:47:27.368045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.377648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.677 [2024-11-19 08:47:27.377687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:05.677 [2024-11-19 08:47:27.377699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.677 [2024-11-19 08:47:27.377707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.377806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.677 [2024-11-19 08:47:27.377817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:05.677 [2024-11-19 08:47:27.377826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.677 [2024-11-19 08:47:27.377833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.377857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.677 
[2024-11-19 08:47:27.377867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:05.677 [2024-11-19 08:47:27.377881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.677 [2024-11-19 08:47:27.377889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.377970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.677 [2024-11-19 08:47:27.377982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:05.677 [2024-11-19 08:47:27.377992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.677 [2024-11-19 08:47:27.377999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.378032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.677 [2024-11-19 08:47:27.378042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:05.677 [2024-11-19 08:47:27.378052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.677 [2024-11-19 08:47:27.378064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.378137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.677 [2024-11-19 08:47:27.378149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:05.677 [2024-11-19 08:47:27.378157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.677 [2024-11-19 08:47:27.378164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.378222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.677 [2024-11-19 08:47:27.378240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:05.677 [2024-11-19 08:47:27.378254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.677 [2024-11-19 08:47:27.378262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.677 [2024-11-19 08:47:27.378392] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.092 ms, result 0 00:26:05.939 00:26:05.939 00:26:05.939 08:47:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:07.851 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:07.851 08:47:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:07.851 08:47:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:07.851 08:47:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:07.851 08:47:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:07.851 08:47:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:07.851 08:47:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:07.851 08:47:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:07.851 08:47:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 88998 00:26:07.851 08:47:29 ftl.ftl_dirty_shutdown 
-- common/autotest_common.sh@954 -- # '[' -z 88998 ']' 00:26:07.851 Process with pid 88998 is not found 00:26:07.851 08:47:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 88998 00:26:07.851 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88998) - No such process 00:26:07.851 08:47:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 88998 is not found' 00:26:07.851 08:47:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:08.112 Remove shared memory files 00:26:08.112 08:47:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:08.112 08:47:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:08.112 08:47:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:08.112 08:47:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:08.112 08:47:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:08.112 08:47:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:08.112 08:47:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:08.112 ************************************ 00:26:08.112 END TEST ftl_dirty_shutdown 00:26:08.112 ************************************ 00:26:08.112 00:26:08.112 real 3m17.629s 00:26:08.112 user 3m45.656s 00:26:08.112 sys 0m29.764s 00:26:08.112 08:47:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:08.112 08:47:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:08.112 08:47:29 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:08.112 08:47:29 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:08.112 08:47:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:08.112 08:47:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:08.112 ************************************ 00:26:08.112 START TEST ftl_upgrade_shutdown 00:26:08.112 ************************************ 00:26:08.112 08:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:08.112 * Looking for test storage... 
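The dirty-shutdown run above ends with its two pass conditions: md5sum -c against test/ftl/testfile2.md5 reports OK, meaning the data read back after the unclean FTL shutdown is byte-identical to what was written beforehand, and kill -0 88998 finds no such process, confirming the target already exited before cleanup. Reduced to a sketch (file names illustrative, the real checksums live under test/ftl):

    # record a checksum of the reference data before forcing the dirty shutdown
    md5sum testfile2 > testfile2.md5
    # ... unclean shutdown, FTL restart/recovery, read the data back, then:
    md5sum -c testfile2.md5    # prints "testfile2: OK" only on an exact match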
00:26:08.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:08.112 08:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:08.112 08:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:08.112 08:47:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:08.372 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:08.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.373 --rc genhtml_branch_coverage=1 00:26:08.373 --rc genhtml_function_coverage=1 00:26:08.373 --rc genhtml_legend=1 00:26:08.373 --rc geninfo_all_blocks=1 00:26:08.373 --rc geninfo_unexecuted_blocks=1 00:26:08.373 00:26:08.373 ' 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:08.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.373 --rc genhtml_branch_coverage=1 00:26:08.373 --rc genhtml_function_coverage=1 00:26:08.373 --rc genhtml_legend=1 00:26:08.373 --rc geninfo_all_blocks=1 00:26:08.373 --rc geninfo_unexecuted_blocks=1 00:26:08.373 00:26:08.373 ' 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:08.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.373 --rc genhtml_branch_coverage=1 00:26:08.373 --rc genhtml_function_coverage=1 00:26:08.373 --rc genhtml_legend=1 00:26:08.373 --rc geninfo_all_blocks=1 00:26:08.373 --rc geninfo_unexecuted_blocks=1 00:26:08.373 00:26:08.373 ' 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:08.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.373 --rc genhtml_branch_coverage=1 00:26:08.373 --rc genhtml_function_coverage=1 00:26:08.373 --rc genhtml_legend=1 00:26:08.373 --rc geninfo_all_blocks=1 00:26:08.373 --rc geninfo_unexecuted_blocks=1 00:26:08.373 00:26:08.373 ' 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:08.373 08:47:30 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91158 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91158 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91158 ']' 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.373 08:47:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:08.373 [2024-11-19 08:47:30.236344] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
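A fresh spdk_tgt (pid 91158) is starting here on a single core, as requested by --cpumask=[0]. The xtrace that follows assembles the FTL bdev from the two PCIe addresses passed to upgrade_shutdown.sh: 0000:00:11.0 provides the 20480 MiB thin-provisioned base volume and 0000:00:10.0 the 5120 MiB NV cache partition. Condensed, with the per-run UUIDs left as placeholders, the rpc.py sequence is roughly:

    rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    rpc.py bdev_lvol_create_lvstore basen1 lvs            # any stale lvstore is deleted first
    rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>
    rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    rpc.py bdev_split_create cachen1 -s 5120 1
    rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2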
00:26:08.373 [2024-11-19 08:47:30.236989] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91158 ] 00:26:08.633 [2024-11-19 08:47:30.395512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.633 [2024-11-19 08:47:30.421490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:09.202 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:09.203 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:09.203 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:09.203 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:09.203 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:09.203 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:09.203 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:09.203 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:09.203 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:09.203 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:09.460 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:09.460 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:09.460 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:09.460 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:26:09.460 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:09.460 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:09.460 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:26:09.460 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:09.720 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:09.720 { 00:26:09.720 "name": "basen1", 00:26:09.720 "aliases": [ 00:26:09.720 "e3709919-b418-4aa0-99a6-30ea7a4a3f80" 00:26:09.720 ], 00:26:09.720 "product_name": "NVMe disk", 00:26:09.720 "block_size": 4096, 00:26:09.720 "num_blocks": 1310720, 00:26:09.720 "uuid": "e3709919-b418-4aa0-99a6-30ea7a4a3f80", 00:26:09.720 "numa_id": -1, 00:26:09.720 "assigned_rate_limits": { 00:26:09.720 "rw_ios_per_sec": 0, 00:26:09.720 "rw_mbytes_per_sec": 0, 00:26:09.720 "r_mbytes_per_sec": 0, 00:26:09.720 "w_mbytes_per_sec": 0 00:26:09.720 }, 00:26:09.720 "claimed": true, 00:26:09.720 "claim_type": "read_many_write_one", 00:26:09.720 "zoned": false, 00:26:09.720 "supported_io_types": { 00:26:09.720 "read": true, 00:26:09.720 "write": true, 00:26:09.720 "unmap": true, 00:26:09.720 "flush": true, 00:26:09.720 "reset": true, 00:26:09.720 "nvme_admin": true, 00:26:09.720 "nvme_io": true, 00:26:09.720 "nvme_io_md": false, 00:26:09.720 "write_zeroes": true, 00:26:09.720 "zcopy": false, 00:26:09.720 "get_zone_info": false, 00:26:09.720 "zone_management": false, 00:26:09.720 "zone_append": false, 00:26:09.720 "compare": true, 00:26:09.720 "compare_and_write": false, 00:26:09.720 "abort": true, 00:26:09.720 "seek_hole": false, 00:26:09.720 "seek_data": false, 00:26:09.720 "copy": true, 00:26:09.720 "nvme_iov_md": false 00:26:09.720 }, 00:26:09.720 "driver_specific": { 00:26:09.720 "nvme": [ 00:26:09.720 { 00:26:09.720 "pci_address": "0000:00:11.0", 00:26:09.720 "trid": { 00:26:09.720 "trtype": "PCIe", 00:26:09.720 "traddr": "0000:00:11.0" 00:26:09.720 }, 00:26:09.720 "ctrlr_data": { 00:26:09.720 "cntlid": 0, 00:26:09.720 "vendor_id": "0x1b36", 00:26:09.720 "model_number": "QEMU NVMe Ctrl", 00:26:09.720 "serial_number": "12341", 00:26:09.720 "firmware_revision": "8.0.0", 00:26:09.720 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:09.720 "oacs": { 00:26:09.720 "security": 0, 00:26:09.720 "format": 1, 00:26:09.720 "firmware": 0, 00:26:09.720 "ns_manage": 1 00:26:09.720 }, 00:26:09.720 "multi_ctrlr": false, 00:26:09.720 "ana_reporting": false 00:26:09.720 }, 00:26:09.720 "vs": { 00:26:09.720 "nvme_version": "1.4" 00:26:09.720 }, 00:26:09.720 "ns_data": { 00:26:09.720 "id": 1, 00:26:09.720 "can_share": false 00:26:09.720 } 00:26:09.720 } 00:26:09.720 ], 00:26:09.720 "mp_policy": "active_passive" 00:26:09.720 } 00:26:09.720 } 00:26:09.720 ]' 00:26:09.720 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:09.720 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:09.720 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:09.720 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:09.720 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:09.720 08:47:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:09.720 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:09.720 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:09.720 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:09.721 08:47:31 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:09.721 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:09.981 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=da7d8577-48fe-4bbd-8e83-946d3ecb133b 00:26:09.981 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:09.981 08:47:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da7d8577-48fe-4bbd-8e83-946d3ecb133b 00:26:10.241 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:10.500 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=21d6dd6b-c3ac-4682-9dc8-624a952c8ee9 00:26:10.500 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 21d6dd6b-c3ac-4682-9dc8-624a952c8ee9 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=41db4450-e2f8-46de-a2b9-76a846ca4195 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 41db4450-e2f8-46de-a2b9-76a846ca4195 ]] 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 41db4450-e2f8-46de-a2b9-76a846ca4195 5120 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=41db4450-e2f8-46de-a2b9-76a846ca4195 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 41db4450-e2f8-46de-a2b9-76a846ca4195 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=41db4450-e2f8-46de-a2b9-76a846ca4195 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41db4450-e2f8-46de-a2b9-76a846ca4195 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:10.760 { 00:26:10.760 "name": "41db4450-e2f8-46de-a2b9-76a846ca4195", 00:26:10.760 "aliases": [ 00:26:10.760 "lvs/basen1p0" 00:26:10.760 ], 00:26:10.760 "product_name": "Logical Volume", 00:26:10.760 "block_size": 4096, 00:26:10.760 "num_blocks": 5242880, 00:26:10.760 "uuid": "41db4450-e2f8-46de-a2b9-76a846ca4195", 00:26:10.760 "assigned_rate_limits": { 00:26:10.760 "rw_ios_per_sec": 0, 00:26:10.760 "rw_mbytes_per_sec": 0, 00:26:10.760 "r_mbytes_per_sec": 0, 00:26:10.760 "w_mbytes_per_sec": 0 00:26:10.760 }, 00:26:10.760 "claimed": false, 00:26:10.760 "zoned": false, 00:26:10.760 "supported_io_types": { 00:26:10.760 "read": true, 00:26:10.760 "write": true, 00:26:10.760 "unmap": true, 00:26:10.760 "flush": false, 00:26:10.760 "reset": true, 00:26:10.760 "nvme_admin": false, 00:26:10.760 "nvme_io": false, 00:26:10.760 "nvme_io_md": false, 00:26:10.760 "write_zeroes": 
true, 00:26:10.760 "zcopy": false, 00:26:10.760 "get_zone_info": false, 00:26:10.760 "zone_management": false, 00:26:10.760 "zone_append": false, 00:26:10.760 "compare": false, 00:26:10.760 "compare_and_write": false, 00:26:10.760 "abort": false, 00:26:10.760 "seek_hole": true, 00:26:10.760 "seek_data": true, 00:26:10.760 "copy": false, 00:26:10.760 "nvme_iov_md": false 00:26:10.760 }, 00:26:10.760 "driver_specific": { 00:26:10.760 "lvol": { 00:26:10.760 "lvol_store_uuid": "21d6dd6b-c3ac-4682-9dc8-624a952c8ee9", 00:26:10.760 "base_bdev": "basen1", 00:26:10.760 "thin_provision": true, 00:26:10.760 "num_allocated_clusters": 0, 00:26:10.760 "snapshot": false, 00:26:10.760 "clone": false, 00:26:10.760 "esnap_clone": false 00:26:10.760 } 00:26:10.760 } 00:26:10.760 } 00:26:10.760 ]' 00:26:10.760 08:47:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:11.020 08:47:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:11.020 08:47:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:11.020 08:47:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:26:11.020 08:47:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:26:11.020 08:47:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:26:11.020 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:11.020 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:11.020 08:47:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:11.280 08:47:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:11.280 08:47:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:11.280 08:47:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:11.541 08:47:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:11.541 08:47:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:11.541 08:47:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 41db4450-e2f8-46de-a2b9-76a846ca4195 -c cachen1p0 --l2p_dram_limit 2 00:26:11.541 [2024-11-19 08:47:33.357560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.541 [2024-11-19 08:47:33.357651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:11.541 [2024-11-19 08:47:33.357668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:11.541 [2024-11-19 08:47:33.357677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.541 [2024-11-19 08:47:33.357734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.541 [2024-11-19 08:47:33.357746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:11.541 [2024-11-19 08:47:33.357754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:26:11.541 [2024-11-19 08:47:33.357768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.541 [2024-11-19 08:47:33.357791] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:11.541 [2024-11-19 
08:47:33.357990] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:11.541 [2024-11-19 08:47:33.358010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.541 [2024-11-19 08:47:33.358019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:11.541 [2024-11-19 08:47:33.358027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.230 ms 00:26:11.541 [2024-11-19 08:47:33.358038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.541 [2024-11-19 08:47:33.358067] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 779195a2-14ef-4d40-b764-59632bb4fea9 00:26:11.541 [2024-11-19 08:47:33.359407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.541 [2024-11-19 08:47:33.359428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:11.541 [2024-11-19 08:47:33.359440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:11.541 [2024-11-19 08:47:33.359447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.541 [2024-11-19 08:47:33.366902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.541 [2024-11-19 08:47:33.366964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:11.541 [2024-11-19 08:47:33.366992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.422 ms 00:26:11.541 [2024-11-19 08:47:33.367011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.541 [2024-11-19 08:47:33.367123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.541 [2024-11-19 08:47:33.367216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:11.541 [2024-11-19 08:47:33.367251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:11.541 [2024-11-19 08:47:33.367285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.541 [2024-11-19 08:47:33.367363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.541 [2024-11-19 08:47:33.367401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:11.541 [2024-11-19 08:47:33.367450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:11.541 [2024-11-19 08:47:33.367470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.541 [2024-11-19 08:47:33.367546] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:11.541 [2024-11-19 08:47:33.369288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.541 [2024-11-19 08:47:33.369365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:11.541 [2024-11-19 08:47:33.369394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.754 ms 00:26:11.541 [2024-11-19 08:47:33.369415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.541 [2024-11-19 08:47:33.369483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.541 [2024-11-19 08:47:33.369506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:11.541 [2024-11-19 08:47:33.369565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:11.541 [2024-11-19 08:47:33.369600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:11.541 [2024-11-19 08:47:33.369639] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:11.541 [2024-11-19 08:47:33.369807] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:11.541 [2024-11-19 08:47:33.369858] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:11.541 [2024-11-19 08:47:33.369901] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:11.541 [2024-11-19 08:47:33.369945] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:11.541 [2024-11-19 08:47:33.369994] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:11.541 [2024-11-19 08:47:33.370040] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:11.541 [2024-11-19 08:47:33.370074] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:11.541 [2024-11-19 08:47:33.370108] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:11.541 [2024-11-19 08:47:33.370138] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:11.541 [2024-11-19 08:47:33.370167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.541 [2024-11-19 08:47:33.370200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:11.541 [2024-11-19 08:47:33.370230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.531 ms 00:26:11.541 [2024-11-19 08:47:33.370251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.541 [2024-11-19 08:47:33.370332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.541 [2024-11-19 08:47:33.370368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:11.541 [2024-11-19 08:47:33.370395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:26:11.541 [2024-11-19 08:47:33.370416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.541 [2024-11-19 08:47:33.370507] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:11.541 [2024-11-19 08:47:33.370542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:11.541 [2024-11-19 08:47:33.370569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:11.541 [2024-11-19 08:47:33.370603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:11.541 [2024-11-19 08:47:33.370636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:11.541 [2024-11-19 08:47:33.370665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:11.541 [2024-11-19 08:47:33.370694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:11.541 [2024-11-19 08:47:33.370732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:11.541 [2024-11-19 08:47:33.370782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:11.542 [2024-11-19 08:47:33.370812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:11.542 [2024-11-19 08:47:33.370837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:11.542 [2024-11-19 08:47:33.370865] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:26:11.542 [2024-11-19 08:47:33.370900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:11.542 [2024-11-19 08:47:33.370935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:11.542 [2024-11-19 08:47:33.370966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:11.542 [2024-11-19 08:47:33.370995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:11.542 [2024-11-19 08:47:33.371023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:11.542 [2024-11-19 08:47:33.371044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:11.542 [2024-11-19 08:47:33.371078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:11.542 [2024-11-19 08:47:33.371109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:11.542 [2024-11-19 08:47:33.371137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:11.542 [2024-11-19 08:47:33.371167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:11.542 [2024-11-19 08:47:33.371186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:11.542 [2024-11-19 08:47:33.371207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:11.542 [2024-11-19 08:47:33.371243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:11.542 [2024-11-19 08:47:33.371269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:11.542 [2024-11-19 08:47:33.371288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:11.542 [2024-11-19 08:47:33.371323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:11.542 [2024-11-19 08:47:33.371352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:11.542 [2024-11-19 08:47:33.371384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:11.542 [2024-11-19 08:47:33.371412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:11.542 [2024-11-19 08:47:33.371432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:11.542 [2024-11-19 08:47:33.371454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:11.542 [2024-11-19 08:47:33.371486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:11.542 [2024-11-19 08:47:33.371511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:11.542 [2024-11-19 08:47:33.371539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:11.542 [2024-11-19 08:47:33.371566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:11.542 [2024-11-19 08:47:33.371587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:11.542 [2024-11-19 08:47:33.371607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:11.542 [2024-11-19 08:47:33.371650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:11.542 [2024-11-19 08:47:33.371675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:11.542 [2024-11-19 08:47:33.371706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:11.542 [2024-11-19 08:47:33.371742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:11.542 [2024-11-19 08:47:33.371775] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:26:11.542 [2024-11-19 08:47:33.371807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:11.542 [2024-11-19 08:47:33.371831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:11.542 [2024-11-19 08:47:33.371864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:11.542 [2024-11-19 08:47:33.371899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:11.542 [2024-11-19 08:47:33.371926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:11.542 [2024-11-19 08:47:33.371956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:11.542 [2024-11-19 08:47:33.372002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:11.542 [2024-11-19 08:47:33.372023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:11.542 [2024-11-19 08:47:33.372051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:11.542 [2024-11-19 08:47:33.372084] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:11.542 [2024-11-19 08:47:33.372126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:11.542 [2024-11-19 08:47:33.372171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:11.542 [2024-11-19 08:47:33.372213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:11.542 [2024-11-19 08:47:33.372264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:11.542 [2024-11-19 08:47:33.372307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:11.542 [2024-11-19 08:47:33.372371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:11.542 [2024-11-19 08:47:33.372411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:11.542 [2024-11-19 08:47:33.372459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:11.542 [2024-11-19 08:47:33.372503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:11.542 [2024-11-19 08:47:33.372554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:11.542 [2024-11-19 08:47:33.372599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:11.542 [2024-11-19 08:47:33.372644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:11.542 [2024-11-19 08:47:33.372696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:11.542 [2024-11-19 08:47:33.372744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:11.542 [2024-11-19 08:47:33.372793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:11.542 [2024-11-19 08:47:33.372834] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:11.542 [2024-11-19 08:47:33.372885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:11.542 [2024-11-19 08:47:33.372963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:11.542 [2024-11-19 08:47:33.373000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:11.542 [2024-11-19 08:47:33.373065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:11.542 [2024-11-19 08:47:33.373107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:11.542 [2024-11-19 08:47:33.373162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.542 [2024-11-19 08:47:33.373191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:11.542 [2024-11-19 08:47:33.373227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.702 ms 00:26:11.542 [2024-11-19 08:47:33.373256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.542 [2024-11-19 08:47:33.373350] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
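(Editor's note: the trace above corresponds to the bdev stack the test assembles before bdev_ftl_create: a thin-provisioned 20 GiB logical volume on the base NVMe device at 0000:00:11.0 and a 5 GiB split of the cache device at 0000:00:10.0 used as the FTL write-buffer / NV cache. A condensed sketch of the same RPC sequence, reconstructed from the commands visible in the trace; RPC is shorthand for scripts/rpc.py against the already-running spdk_tgt, and the captured variables mirror how ftl/common.sh consumes the RPC output. This is an illustration, not the harness itself.)

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base device: attach, wipe any stale lvstores, carve a thin 20 GiB lvol for FTL data
    $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0      # exposes basen1
    for lvs in $($RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $RPC bdev_lvol_delete_lvstore -u "$lvs"
    done
    lvs=$($RPC bdev_lvol_create_lvstore basen1 lvs)                       # prints the lvstore UUID
    base=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs")             # prints the lvol bdev UUID
    # Cache device: attach and split off a 5 GiB partition for the NV cache
    $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0     # exposes cachen1
    $RPC bdev_split_create cachen1 -s 5120 1                              # exposes cachen1p0
    # Glue them together as the FTL bdev (60 s RPC timeout, 2 GiB L2P DRAM limit)
    $RPC -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2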
00:26:11.542 [2024-11-19 08:47:33.373396] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:15.748 [2024-11-19 08:47:37.130567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.130711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:15.748 [2024-11-19 08:47:37.130753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3764.459 ms 00:26:15.748 [2024-11-19 08:47:37.130774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.141518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.141636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:15.748 [2024-11-19 08:47:37.141669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.662 ms 00:26:15.748 [2024-11-19 08:47:37.141690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.141779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.141818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:15.748 [2024-11-19 08:47:37.141860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:15.748 [2024-11-19 08:47:37.141869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.152219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.152262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:15.748 [2024-11-19 08:47:37.152275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.320 ms 00:26:15.748 [2024-11-19 08:47:37.152283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.152318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.152336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:15.748 [2024-11-19 08:47:37.152346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:15.748 [2024-11-19 08:47:37.152353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.152836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.152849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:15.748 [2024-11-19 08:47:37.152860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.428 ms 00:26:15.748 [2024-11-19 08:47:37.152867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.152904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.152924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:15.748 [2024-11-19 08:47:37.152933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:15.748 [2024-11-19 08:47:37.152941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.159789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.159822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:15.748 [2024-11-19 08:47:37.159835] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.841 ms 00:26:15.748 [2024-11-19 08:47:37.159851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.166811] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:15.748 [2024-11-19 08:47:37.167899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.167926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:15.748 [2024-11-19 08:47:37.167943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.999 ms 00:26:15.748 [2024-11-19 08:47:37.167952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.196417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.196638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:15.748 [2024-11-19 08:47:37.196684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.489 ms 00:26:15.748 [2024-11-19 08:47:37.196738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.196940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.196974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:15.748 [2024-11-19 08:47:37.196996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.112 ms 00:26:15.748 [2024-11-19 08:47:37.197044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.201862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.201941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:15.748 [2024-11-19 08:47:37.201967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.748 ms 00:26:15.748 [2024-11-19 08:47:37.201996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.206089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.206147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:15.748 [2024-11-19 08:47:37.206164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.024 ms 00:26:15.748 [2024-11-19 08:47:37.206180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.206619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.206643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:15.748 [2024-11-19 08:47:37.206660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.387 ms 00:26:15.748 [2024-11-19 08:47:37.206679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.244695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.748 [2024-11-19 08:47:37.244760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:15.748 [2024-11-19 08:47:37.244774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.008 ms 00:26:15.748 [2024-11-19 08:47:37.244787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.748 [2024-11-19 08:47:37.249038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
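(Editor's note: every FTL management step above is reported as an Action / name / duration / status quadruple, so a per-step timing profile can be pulled straight out of the console output. A small helper along these lines is one way to do it; it is not part of the test scripts, and it assumes the output has been saved as build.log with each trace record on its own line and the exact wording shown above.)

    awk '/trace_step.*name:/     { sub(/.*name: /, "");     step = $0 }
         /trace_step.*duration:/ { sub(/.*duration: /, ""); printf "%-40s %s\n", step, $0 }' build.log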
00:26:15.749 [2024-11-19 08:47:37.249081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:15.749 [2024-11-19 08:47:37.249092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.201 ms 00:26:15.749 [2024-11-19 08:47:37.249101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.749 [2024-11-19 08:47:37.252264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.749 [2024-11-19 08:47:37.252304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:15.749 [2024-11-19 08:47:37.252313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.135 ms 00:26:15.749 [2024-11-19 08:47:37.252322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.749 [2024-11-19 08:47:37.255609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.749 [2024-11-19 08:47:37.255652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:15.749 [2024-11-19 08:47:37.255662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.261 ms 00:26:15.749 [2024-11-19 08:47:37.255676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.749 [2024-11-19 08:47:37.255737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.749 [2024-11-19 08:47:37.255753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:15.749 [2024-11-19 08:47:37.255762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:15.749 [2024-11-19 08:47:37.255772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.749 [2024-11-19 08:47:37.255836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.749 [2024-11-19 08:47:37.255849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:15.749 [2024-11-19 08:47:37.255857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:15.749 [2024-11-19 08:47:37.255866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.749 [2024-11-19 08:47:37.256863] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3906.370 ms, result 0 00:26:15.749 { 00:26:15.749 "name": "ftl", 00:26:15.749 "uuid": "779195a2-14ef-4d40-b764-59632bb4fea9" 00:26:15.749 } 00:26:15.749 08:47:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:15.749 [2024-11-19 08:47:37.469155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.749 08:47:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:16.009 08:47:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:16.009 [2024-11-19 08:47:37.869117] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:16.009 08:47:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:16.269 [2024-11-19 08:47:38.053308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:16.269 08:47:38 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:16.529 Fill FTL, iteration 1 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91280 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91280 /var/tmp/spdk.tgt.sock 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91280 ']' 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:16.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.529 08:47:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:16.789 [2024-11-19 08:47:38.516439] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
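(Editor's note: once FTL startup finishes with result 0, the target side exports the ftl bdev over NVMe/TCP and the harness starts a second, single-core SPDK app on /var/tmp/spdk.tgt.sock to act as the initiator for the dd traffic. The export itself is the four nvmf RPCs visible in the trace; restated compactly below with the same NQN, address and port. The redirect of save_config into tgt.json is illustrative; the harness keeps it under test/ftl/config/.)

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport --trtype TCP
    $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1         # allow any host, max 1 namespace
    $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl             # expose the ftl bdev as namespace 1
    $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    $RPC save_config > tgt.json                                           # snapshot of the target configuration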
00:26:16.789 [2024-11-19 08:47:38.516640] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91280 ] 00:26:16.789 [2024-11-19 08:47:38.674674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.049 [2024-11-19 08:47:38.712657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.618 08:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.618 08:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:17.618 08:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:17.878 ftln1 00:26:17.878 08:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:17.878 08:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:18.139 08:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:18.139 08:47:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91280 00:26:18.139 08:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91280 ']' 00:26:18.139 08:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91280 00:26:18.139 08:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:26:18.139 08:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.139 08:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91280 00:26:18.139 killing process with pid 91280 00:26:18.139 08:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:18.139 08:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:18.139 08:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91280' 00:26:18.139 08:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91280 00:26:18.139 08:47:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91280 00:26:18.710 08:47:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:18.710 08:47:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:18.710 [2024-11-19 08:47:40.517937] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
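(Editor's note: the initiator side shown above is a short-lived spdk_tgt pinned to core 1 that attaches the exported namespace over TCP, producing ftln1. Its bdev subsystem configuration is captured into ini.json, the helper app is killed, and spdk_dd is then launched with that JSON so it can open ftln1 directly and fill it with 1 GiB of urandom at queue depth 2. A sketch using the same paths and arguments as the trace; the sleep stands in for the harness's waitforlisten, and the redirection into ini.json is inferred from the surrounding checks.)

    SPDK=/home/vagrant/spdk_repo/spdk
    INI=$SPDK/test/ftl/config/ini.json
    # 1) Temporary initiator app, used only to discover the remote namespace and dump its bdev config
    $SPDK/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    ini_pid=$!
    sleep 2   # stand-in for: waitforlisten "$ini_pid" /var/tmp/spdk.tgt.sock
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
        bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2018-09.io.spdk:cnode0                                     # prints ftln1
    { echo '{"subsystems": ['
      $SPDK/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
      echo ']}'; } > "$INI"
    kill "$ini_pid"
    # 2) Fill pass: 1024 x 1 MiB of urandom into ftln1 at qd=2, starting at offset 0
    $SPDK/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$INI" --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0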
00:26:18.710 [2024-11-19 08:47:40.518128] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91312 ] 00:26:18.970 [2024-11-19 08:47:40.672368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.970 [2024-11-19 08:47:40.709908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.352  [2024-11-19T08:47:43.198Z] Copying: 267/1024 [MB] (267 MBps) [2024-11-19T08:47:44.135Z] Copying: 539/1024 [MB] (272 MBps) [2024-11-19T08:47:45.074Z] Copying: 811/1024 [MB] (272 MBps) [2024-11-19T08:47:45.074Z] Copying: 1024/1024 [MB] (average 270 MBps) 00:26:23.167 00:26:23.167 Calculate MD5 checksum, iteration 1 00:26:23.167 08:47:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:23.167 08:47:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:23.167 08:47:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:23.167 08:47:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:23.167 08:47:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:23.167 08:47:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:23.167 08:47:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:23.167 08:47:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:23.427 [2024-11-19 08:47:45.147197] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:26:23.427 [2024-11-19 08:47:45.147406] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91361 ] 00:26:23.427 [2024-11-19 08:47:45.302619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.688 [2024-11-19 08:47:45.341047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.070  [2024-11-19T08:47:47.237Z] Copying: 637/1024 [MB] (637 MBps) [2024-11-19T08:47:47.808Z] Copying: 1024/1024 [MB] (average 634 MBps) 00:26:25.901 00:26:25.901 08:47:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:25.901 08:47:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:27.284 08:47:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:27.284 08:47:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=3a8111f2af1b274cf5d4eff1976ef056 00:26:27.284 08:47:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:27.284 08:47:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:27.284 08:47:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:27.284 Fill FTL, iteration 2 00:26:27.284 08:47:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:27.284 08:47:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:27.284 08:47:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:27.284 08:47:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:27.284 08:47:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:27.284 08:47:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:27.543 [2024-11-19 08:47:49.230813] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
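(Editor's note: at this point the first 1 GiB stripe has been written and read back, the harness has hashed the read-back file into sums[0], bumped seek and skip by 1024 MiB, and started "Fill FTL, iteration 2". The bookkeeping reduces to a loop of the following shape, restated from the upgrade_shutdown.sh variables visible in the trace; tcp_dd is the harness helper wrapping the spdk_dd invocation shown above and is assumed to be defined already.)

    FILE=/home/vagrant/spdk_repo/spdk/test/ftl/file
    bs=1048576; count=1024; iterations=2; qd=2
    seek=0; skip=0; sums=()
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$FILE" --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum "$FILE" | cut -d' ' -f1)      # digest is compared again after the upgrade/restart
    done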
00:26:27.543 [2024-11-19 08:47:49.231036] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91411 ] 00:26:27.543 [2024-11-19 08:47:49.385381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.543 [2024-11-19 08:47:49.422888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.924  [2024-11-19T08:47:51.770Z] Copying: 270/1024 [MB] (270 MBps) [2024-11-19T08:47:52.710Z] Copying: 539/1024 [MB] (269 MBps) [2024-11-19T08:47:53.650Z] Copying: 812/1024 [MB] (273 MBps) [2024-11-19T08:47:53.910Z] Copying: 1024/1024 [MB] (average 269 MBps) 00:26:32.003 00:26:32.003 08:47:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:32.003 08:47:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:32.003 Calculate MD5 checksum, iteration 2 00:26:32.003 08:47:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:32.003 08:47:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:32.003 08:47:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:32.003 08:47:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:32.003 08:47:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:32.003 08:47:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:32.263 [2024-11-19 08:47:53.908167] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:26:32.263 [2024-11-19 08:47:53.908392] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91460 ] 00:26:32.263 [2024-11-19 08:47:54.063728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.263 [2024-11-19 08:47:54.103701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.848  [2024-11-19T08:47:57.013Z] Copying: 628/1024 [MB] (628 MBps) [2024-11-19T08:47:57.951Z] Copying: 1024/1024 [MB] (average 615 MBps) 00:26:36.044 00:26:36.044 08:47:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:36.044 08:47:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:37.947 08:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:37.948 08:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=434c29ff39e341281ad3b57474591588 00:26:37.948 08:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:37.948 08:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:37.948 08:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:37.948 [2024-11-19 08:47:59.527274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.948 [2024-11-19 08:47:59.527331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:37.948 [2024-11-19 08:47:59.527344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:37.948 [2024-11-19 08:47:59.527351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.948 [2024-11-19 08:47:59.527374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.948 [2024-11-19 08:47:59.527386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:37.948 [2024-11-19 08:47:59.527394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:37.948 [2024-11-19 08:47:59.527401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.948 [2024-11-19 08:47:59.527418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.948 [2024-11-19 08:47:59.527425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:37.948 [2024-11-19 08:47:59.527432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:37.948 [2024-11-19 08:47:59.527439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.948 [2024-11-19 08:47:59.527500] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.238 ms, result 0 00:26:37.948 true 00:26:37.948 08:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:37.948 { 00:26:37.948 "name": "ftl", 00:26:37.948 "properties": [ 00:26:37.948 { 00:26:37.948 "name": "superblock_version", 00:26:37.948 "value": 5, 00:26:37.948 "read-only": true 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "name": "base_device", 00:26:37.948 "bands": [ 00:26:37.948 { 00:26:37.948 "id": 0, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 
00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 1, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 2, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 3, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 4, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 5, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 6, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 7, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 8, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 9, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 10, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 11, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 12, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 13, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 14, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 15, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 16, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 17, 00:26:37.948 "state": "FREE", 00:26:37.948 "validity": 0.0 00:26:37.948 } 00:26:37.948 ], 00:26:37.948 "read-only": true 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "name": "cache_device", 00:26:37.948 "type": "bdev", 00:26:37.948 "chunks": [ 00:26:37.948 { 00:26:37.948 "id": 0, 00:26:37.948 "state": "INACTIVE", 00:26:37.948 "utilization": 0.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 1, 00:26:37.948 "state": "CLOSED", 00:26:37.948 "utilization": 1.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 2, 00:26:37.948 "state": "CLOSED", 00:26:37.948 "utilization": 1.0 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 3, 00:26:37.948 "state": "OPEN", 00:26:37.948 "utilization": 0.001953125 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "id": 4, 00:26:37.948 "state": "OPEN", 00:26:37.948 "utilization": 0.0 00:26:37.948 } 00:26:37.948 ], 00:26:37.948 "read-only": true 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "name": "verbose_mode", 00:26:37.948 "value": true, 00:26:37.948 "unit": "", 00:26:37.948 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "name": "prep_upgrade_on_shutdown", 00:26:37.948 "value": false, 00:26:37.948 "unit": "", 00:26:37.948 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:37.948 } 00:26:37.948 ] 00:26:37.948 } 00:26:37.948 08:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:38.208 [2024-11-19 08:47:59.894865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
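(Editor's note: the properties dump above is what the test keys its upgrade decision on: all base-device bands are still FREE, two cache chunks are CLOSED at utilization 1.0 and a third is OPEN with a sliver of data, which matches the two 1 GiB fills, and prep_upgrade_on_shutdown is still false. The next RPCs in the trace flip that property and count the non-empty cache chunks; as standalone commands they look like this, using the same jq filter as upgrade_shutdown.sh, and the count is expected to be non-zero here, 3 in this run.)

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Ask FTL to run the pre-upgrade actions when it shuts down
    $RPC bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    # Count cache chunks that actually hold data
    $RPC bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'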
00:26:38.208 [2024-11-19 08:47:59.894902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:38.208 [2024-11-19 08:47:59.894912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:38.208 [2024-11-19 08:47:59.894919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.208 [2024-11-19 08:47:59.894938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.208 [2024-11-19 08:47:59.894946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:38.208 [2024-11-19 08:47:59.894953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:38.208 [2024-11-19 08:47:59.894959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.208 [2024-11-19 08:47:59.894974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.208 [2024-11-19 08:47:59.894981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:38.208 [2024-11-19 08:47:59.894988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:38.208 [2024-11-19 08:47:59.894995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.208 [2024-11-19 08:47:59.895041] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.165 ms, result 0 00:26:38.208 true 00:26:38.208 08:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:38.208 08:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:38.208 08:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:38.208 08:48:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:38.208 08:48:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:38.208 08:48:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:38.468 [2024-11-19 08:48:00.270769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.468 [2024-11-19 08:48:00.270800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:38.468 [2024-11-19 08:48:00.270809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:38.468 [2024-11-19 08:48:00.270816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.468 [2024-11-19 08:48:00.270836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.468 [2024-11-19 08:48:00.270843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:38.468 [2024-11-19 08:48:00.270850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:38.468 [2024-11-19 08:48:00.270856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.468 [2024-11-19 08:48:00.270872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.468 [2024-11-19 08:48:00.270879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:38.468 [2024-11-19 08:48:00.270886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:38.468 [2024-11-19 08:48:00.270892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:38.468 [2024-11-19 08:48:00.270936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.156 ms, result 0 00:26:38.469 true 00:26:38.469 08:48:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:38.729 { 00:26:38.729 "name": "ftl", 00:26:38.729 "properties": [ 00:26:38.729 { 00:26:38.729 "name": "superblock_version", 00:26:38.729 "value": 5, 00:26:38.729 "read-only": true 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "name": "base_device", 00:26:38.729 "bands": [ 00:26:38.729 { 00:26:38.729 "id": 0, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 1, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 2, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 3, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 4, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 5, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 6, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 7, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 8, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 9, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 10, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 11, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 12, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 13, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 14, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 15, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 16, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 17, 00:26:38.729 "state": "FREE", 00:26:38.729 "validity": 0.0 00:26:38.729 } 00:26:38.729 ], 00:26:38.729 "read-only": true 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "name": "cache_device", 00:26:38.729 "type": "bdev", 00:26:38.729 "chunks": [ 00:26:38.729 { 00:26:38.729 "id": 0, 00:26:38.729 "state": "INACTIVE", 00:26:38.729 "utilization": 0.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 1, 00:26:38.729 "state": "CLOSED", 00:26:38.729 "utilization": 1.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 2, 00:26:38.729 "state": "CLOSED", 00:26:38.729 "utilization": 1.0 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 3, 00:26:38.729 "state": "OPEN", 00:26:38.729 "utilization": 0.001953125 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "id": 4, 00:26:38.729 "state": "OPEN", 00:26:38.729 "utilization": 0.0 00:26:38.729 } 00:26:38.729 ], 00:26:38.729 "read-only": true 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "name": "verbose_mode", 
00:26:38.729 "value": true, 00:26:38.729 "unit": "", 00:26:38.729 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:38.729 }, 00:26:38.729 { 00:26:38.729 "name": "prep_upgrade_on_shutdown", 00:26:38.729 "value": true, 00:26:38.729 "unit": "", 00:26:38.730 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:38.730 } 00:26:38.730 ] 00:26:38.730 } 00:26:38.730 08:48:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:38.730 08:48:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91158 ]] 00:26:38.730 08:48:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91158 00:26:38.730 08:48:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91158 ']' 00:26:38.730 08:48:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91158 00:26:38.730 08:48:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:26:38.730 08:48:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.730 08:48:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91158 00:26:38.730 killing process with pid 91158 00:26:38.730 08:48:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:38.730 08:48:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:38.730 08:48:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91158' 00:26:38.730 08:48:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91158 00:26:38.730 08:48:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91158 00:26:38.990 [2024-11-19 08:48:00.639603] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:38.990 [2024-11-19 08:48:00.646075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.990 [2024-11-19 08:48:00.646120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:38.990 [2024-11-19 08:48:00.646133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:38.990 [2024-11-19 08:48:00.646140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.990 [2024-11-19 08:48:00.646162] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:38.990 [2024-11-19 08:48:00.646815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.990 [2024-11-19 08:48:00.646831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:38.990 [2024-11-19 08:48:00.646840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.642 ms 00:26:38.990 [2024-11-19 08:48:00.646857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.122 [2024-11-19 08:48:07.597141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.122 [2024-11-19 08:48:07.597209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:47.122 [2024-11-19 08:48:07.597224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6963.672 ms 00:26:47.122 [2024-11-19 08:48:07.597231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.122 [2024-11-19 08:48:07.598262] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:26:47.122 [2024-11-19 08:48:07.598292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:47.122 [2024-11-19 08:48:07.598302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.018 ms 00:26:47.122 [2024-11-19 08:48:07.598309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.122 [2024-11-19 08:48:07.599181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.122 [2024-11-19 08:48:07.599195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:47.122 [2024-11-19 08:48:07.599209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.849 ms 00:26:47.122 [2024-11-19 08:48:07.599219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.122 [2024-11-19 08:48:07.601069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.122 [2024-11-19 08:48:07.601107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:47.122 [2024-11-19 08:48:07.601117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.803 ms 00:26:47.122 [2024-11-19 08:48:07.601124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.122 [2024-11-19 08:48:07.603551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.122 [2024-11-19 08:48:07.603669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:47.122 [2024-11-19 08:48:07.603682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.406 ms 00:26:47.122 [2024-11-19 08:48:07.603690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.122 [2024-11-19 08:48:07.603765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.122 [2024-11-19 08:48:07.603776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:47.122 [2024-11-19 08:48:07.603785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:47.122 [2024-11-19 08:48:07.603792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.122 [2024-11-19 08:48:07.605108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.122 [2024-11-19 08:48:07.605142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:47.122 [2024-11-19 08:48:07.605152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.304 ms 00:26:47.122 [2024-11-19 08:48:07.605158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.122 [2024-11-19 08:48:07.606509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.122 [2024-11-19 08:48:07.606543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:47.122 [2024-11-19 08:48:07.606552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.327 ms 00:26:47.122 [2024-11-19 08:48:07.606558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.122 [2024-11-19 08:48:07.607767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.122 [2024-11-19 08:48:07.607829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:47.122 [2024-11-19 08:48:07.607855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.188 ms 00:26:47.122 [2024-11-19 08:48:07.607874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.122 [2024-11-19 08:48:07.608939] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.122 [2024-11-19 08:48:07.609001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:47.122 [2024-11-19 08:48:07.609026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.004 ms 00:26:47.122 [2024-11-19 08:48:07.609046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.122 [2024-11-19 08:48:07.609086] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:47.122 [2024-11-19 08:48:07.609111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:47.122 [2024-11-19 08:48:07.609141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:47.122 [2024-11-19 08:48:07.609186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:47.122 [2024-11-19 08:48:07.609196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:47.122 [2024-11-19 08:48:07.609303] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:47.122 [2024-11-19 08:48:07.609311] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 779195a2-14ef-4d40-b764-59632bb4fea9 00:26:47.122 [2024-11-19 08:48:07.609332] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:47.122 [2024-11-19 08:48:07.609339] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:26:47.123 [2024-11-19 08:48:07.609347] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:47.123 [2024-11-19 08:48:07.609358] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:47.123 [2024-11-19 08:48:07.609365] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:47.123 [2024-11-19 08:48:07.609372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:47.123 [2024-11-19 08:48:07.609379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:47.123 [2024-11-19 08:48:07.609386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:47.123 [2024-11-19 08:48:07.609392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:47.123 [2024-11-19 08:48:07.609399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.123 [2024-11-19 08:48:07.609407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:47.123 [2024-11-19 08:48:07.609415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.315 ms 00:26:47.123 [2024-11-19 08:48:07.609421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.611176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.123 [2024-11-19 08:48:07.611199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:47.123 [2024-11-19 08:48:07.611208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.742 ms 00:26:47.123 [2024-11-19 08:48:07.611215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.611313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.123 [2024-11-19 08:48:07.611321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:47.123 [2024-11-19 08:48:07.611329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:26:47.123 [2024-11-19 08:48:07.611336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.617500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:47.123 [2024-11-19 08:48:07.617587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:47.123 [2024-11-19 08:48:07.617600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:47.123 [2024-11-19 08:48:07.617618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.617646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:47.123 [2024-11-19 08:48:07.617654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:47.123 [2024-11-19 08:48:07.617661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:47.123 [2024-11-19 08:48:07.617668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.617734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:47.123 [2024-11-19 08:48:07.617752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:47.123 [2024-11-19 08:48:07.617763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:47.123 [2024-11-19 08:48:07.617770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.617787] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:47.123 [2024-11-19 08:48:07.617794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:47.123 [2024-11-19 08:48:07.617802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:47.123 [2024-11-19 08:48:07.617811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.631484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:47.123 [2024-11-19 08:48:07.631599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:47.123 [2024-11-19 08:48:07.631612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:47.123 [2024-11-19 08:48:07.631620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.639690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:47.123 [2024-11-19 08:48:07.639732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:47.123 [2024-11-19 08:48:07.639743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:47.123 [2024-11-19 08:48:07.639750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.639817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:47.123 [2024-11-19 08:48:07.639826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:47.123 [2024-11-19 08:48:07.639834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:47.123 [2024-11-19 08:48:07.639847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.639878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:47.123 [2024-11-19 08:48:07.639886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:47.123 [2024-11-19 08:48:07.639894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:47.123 [2024-11-19 08:48:07.639902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.639970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:47.123 [2024-11-19 08:48:07.639980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:47.123 [2024-11-19 08:48:07.639988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:47.123 [2024-11-19 08:48:07.639996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.640031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:47.123 [2024-11-19 08:48:07.640040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:47.123 [2024-11-19 08:48:07.640048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:47.123 [2024-11-19 08:48:07.640063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.640101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:47.123 [2024-11-19 08:48:07.640118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:47.123 [2024-11-19 08:48:07.640126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:47.123 [2024-11-19 08:48:07.640134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 
[2024-11-19 08:48:07.640208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:47.123 [2024-11-19 08:48:07.640220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:47.123 [2024-11-19 08:48:07.640228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:47.123 [2024-11-19 08:48:07.640235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.123 [2024-11-19 08:48:07.640349] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7007.735 ms, result 0 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91630 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91630 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91630 ']' 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.666 08:48:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:49.666 [2024-11-19 08:48:11.216297] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
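This is the restart leg of the test: the target process killed above is brought back up from the saved tgt.json config, FTL rebuilds its state from the base and cache bdevs (the Load super block / Restore steps that follow), and the MD5 pass further down checks that the data written before the shutdown is still intact. A rough sketch of that relaunch, with repo-root-relative paths; the test itself drives this through the tcp_target_setup helper traced above, and the wait loop here is only a stand-in for waitforlisten:

    # bring the target back up from the saved config
    ./build/bin/spdk_tgt --cpumask='[0]' --config=test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    # crude stand-in for waitforlisten: poll until the RPC socket answers
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done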
00:26:49.666 [2024-11-19 08:48:11.216550] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91630 ] 00:26:49.666 [2024-11-19 08:48:11.373490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.666 [2024-11-19 08:48:11.399215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.926 [2024-11-19 08:48:11.707564] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:49.926 [2024-11-19 08:48:11.707633] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:50.187 [2024-11-19 08:48:11.851816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.187 [2024-11-19 08:48:11.851858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:50.187 [2024-11-19 08:48:11.851870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:50.187 [2024-11-19 08:48:11.851886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.187 [2024-11-19 08:48:11.851930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.187 [2024-11-19 08:48:11.851939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:50.187 [2024-11-19 08:48:11.851947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:50.187 [2024-11-19 08:48:11.851956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.187 [2024-11-19 08:48:11.851973] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:50.187 [2024-11-19 08:48:11.852165] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:50.187 [2024-11-19 08:48:11.852185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.187 [2024-11-19 08:48:11.852192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:50.187 [2024-11-19 08:48:11.852200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.216 ms 00:26:50.187 [2024-11-19 08:48:11.852206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.187 [2024-11-19 08:48:11.853569] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:50.187 [2024-11-19 08:48:11.855999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.187 [2024-11-19 08:48:11.856031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:50.187 [2024-11-19 08:48:11.856054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.436 ms 00:26:50.187 [2024-11-19 08:48:11.856069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.187 [2024-11-19 08:48:11.856122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.187 [2024-11-19 08:48:11.856132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:50.187 [2024-11-19 08:48:11.856141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:50.187 [2024-11-19 08:48:11.856147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.187 [2024-11-19 08:48:11.862790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.187 [2024-11-19 
08:48:11.862913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:50.187 [2024-11-19 08:48:11.862926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.595 ms 00:26:50.187 [2024-11-19 08:48:11.862933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.187 [2024-11-19 08:48:11.862986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.187 [2024-11-19 08:48:11.862999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:50.187 [2024-11-19 08:48:11.863006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:50.187 [2024-11-19 08:48:11.863013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.187 [2024-11-19 08:48:11.863065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.187 [2024-11-19 08:48:11.863074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:50.187 [2024-11-19 08:48:11.863085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:50.187 [2024-11-19 08:48:11.863092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.187 [2024-11-19 08:48:11.863116] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:50.188 [2024-11-19 08:48:11.864723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.188 [2024-11-19 08:48:11.864771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:50.188 [2024-11-19 08:48:11.864780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.620 ms 00:26:50.188 [2024-11-19 08:48:11.864788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.188 [2024-11-19 08:48:11.864818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.188 [2024-11-19 08:48:11.864831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:50.188 [2024-11-19 08:48:11.864838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:50.188 [2024-11-19 08:48:11.864846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.188 [2024-11-19 08:48:11.864868] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:50.188 [2024-11-19 08:48:11.864901] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:50.188 [2024-11-19 08:48:11.864941] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:50.188 [2024-11-19 08:48:11.864957] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:50.188 [2024-11-19 08:48:11.865045] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:50.188 [2024-11-19 08:48:11.865059] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:50.188 [2024-11-19 08:48:11.865070] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:50.188 [2024-11-19 08:48:11.865080] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:50.188 [2024-11-19 08:48:11.865088] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:26:50.188 [2024-11-19 08:48:11.865096] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:50.188 [2024-11-19 08:48:11.865103] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:50.188 [2024-11-19 08:48:11.865110] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:50.188 [2024-11-19 08:48:11.865118] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:50.188 [2024-11-19 08:48:11.865126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.188 [2024-11-19 08:48:11.865140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:50.188 [2024-11-19 08:48:11.865151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms 00:26:50.188 [2024-11-19 08:48:11.865159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.188 [2024-11-19 08:48:11.865230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.188 [2024-11-19 08:48:11.865247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:50.188 [2024-11-19 08:48:11.865255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:26:50.188 [2024-11-19 08:48:11.865262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.188 [2024-11-19 08:48:11.865359] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:50.188 [2024-11-19 08:48:11.865370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:50.188 [2024-11-19 08:48:11.865378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:50.188 [2024-11-19 08:48:11.865388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.188 [2024-11-19 08:48:11.865396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:50.188 [2024-11-19 08:48:11.865403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:50.188 [2024-11-19 08:48:11.865410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:50.188 [2024-11-19 08:48:11.865417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:50.188 [2024-11-19 08:48:11.865425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:50.188 [2024-11-19 08:48:11.865434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.188 [2024-11-19 08:48:11.865440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:50.188 [2024-11-19 08:48:11.865446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:50.188 [2024-11-19 08:48:11.865451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.188 [2024-11-19 08:48:11.865458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:50.188 [2024-11-19 08:48:11.865464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:50.188 [2024-11-19 08:48:11.865474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.188 [2024-11-19 08:48:11.865482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:50.188 [2024-11-19 08:48:11.865488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:50.188 [2024-11-19 08:48:11.865494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.188 [2024-11-19 08:48:11.865501] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:50.188 [2024-11-19 08:48:11.865507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:50.188 [2024-11-19 08:48:11.865513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.188 [2024-11-19 08:48:11.865520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:50.188 [2024-11-19 08:48:11.865526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:50.188 [2024-11-19 08:48:11.865532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.188 [2024-11-19 08:48:11.865539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:50.188 [2024-11-19 08:48:11.865545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:50.188 [2024-11-19 08:48:11.865551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.188 [2024-11-19 08:48:11.865557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:50.188 [2024-11-19 08:48:11.865563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:50.188 [2024-11-19 08:48:11.865569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.188 [2024-11-19 08:48:11.865577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:50.188 [2024-11-19 08:48:11.865584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:50.188 [2024-11-19 08:48:11.865591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.188 [2024-11-19 08:48:11.865597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:50.188 [2024-11-19 08:48:11.865604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:50.188 [2024-11-19 08:48:11.865610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.188 [2024-11-19 08:48:11.865616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:50.188 [2024-11-19 08:48:11.865622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:50.188 [2024-11-19 08:48:11.865628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.188 [2024-11-19 08:48:11.865634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:50.188 [2024-11-19 08:48:11.865642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:50.188 [2024-11-19 08:48:11.865649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.188 [2024-11-19 08:48:11.865655] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:50.188 [2024-11-19 08:48:11.865662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:50.188 [2024-11-19 08:48:11.865669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:50.188 [2024-11-19 08:48:11.865683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.188 [2024-11-19 08:48:11.865695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:50.188 [2024-11-19 08:48:11.865703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:50.188 [2024-11-19 08:48:11.865709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:50.188 [2024-11-19 08:48:11.865731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:50.188 [2024-11-19 08:48:11.865738] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:50.188 [2024-11-19 08:48:11.865745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:50.188 [2024-11-19 08:48:11.865754] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:50.188 [2024-11-19 08:48:11.865763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:50.188 [2024-11-19 08:48:11.865772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:50.188 [2024-11-19 08:48:11.865780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:50.188 [2024-11-19 08:48:11.865787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:50.188 [2024-11-19 08:48:11.865794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:50.188 [2024-11-19 08:48:11.865801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:50.188 [2024-11-19 08:48:11.865809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:50.188 [2024-11-19 08:48:11.865816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:50.188 [2024-11-19 08:48:11.865823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:50.188 [2024-11-19 08:48:11.865832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:50.188 [2024-11-19 08:48:11.865840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:50.188 [2024-11-19 08:48:11.865847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:50.188 [2024-11-19 08:48:11.865854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:50.188 [2024-11-19 08:48:11.865861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:50.188 [2024-11-19 08:48:11.865868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:50.188 [2024-11-19 08:48:11.865875] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:50.189 [2024-11-19 08:48:11.865883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:50.189 [2024-11-19 08:48:11.865891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:50.189 [2024-11-19 08:48:11.865898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:50.189 [2024-11-19 08:48:11.865905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:50.189 [2024-11-19 08:48:11.865911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:50.189 [2024-11-19 08:48:11.865930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.189 [2024-11-19 08:48:11.865946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:50.189 [2024-11-19 08:48:11.865956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.624 ms 00:26:50.189 [2024-11-19 08:48:11.865963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.189 [2024-11-19 08:48:11.866014] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:50.189 [2024-11-19 08:48:11.866025] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:54.388 [2024-11-19 08:48:15.768507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.768568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:54.388 [2024-11-19 08:48:15.768584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3910.019 ms 00:26:54.388 [2024-11-19 08:48:15.768609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.788341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.788398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:54.388 [2024-11-19 08:48:15.788435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.600 ms 00:26:54.388 [2024-11-19 08:48:15.788445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.788570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.788582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:54.388 [2024-11-19 08:48:15.788591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:54.388 [2024-11-19 08:48:15.788599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.806617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.806757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:54.388 [2024-11-19 08:48:15.806776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.990 ms 00:26:54.388 [2024-11-19 08:48:15.806798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.806874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.806884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:54.388 [2024-11-19 08:48:15.806893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:54.388 [2024-11-19 08:48:15.806906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.807705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.807717] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:54.388 [2024-11-19 08:48:15.807725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.736 ms 00:26:54.388 [2024-11-19 08:48:15.807744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.807790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.807801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:54.388 [2024-11-19 08:48:15.807825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:54.388 [2024-11-19 08:48:15.807833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.819903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.819944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:54.388 [2024-11-19 08:48:15.819958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.061 ms 00:26:54.388 [2024-11-19 08:48:15.819989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.823708] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:54.388 [2024-11-19 08:48:15.823750] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:54.388 [2024-11-19 08:48:15.823764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.823772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:54.388 [2024-11-19 08:48:15.823781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.637 ms 00:26:54.388 [2024-11-19 08:48:15.823788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.827562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.827601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:54.388 [2024-11-19 08:48:15.827613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.737 ms 00:26:54.388 [2024-11-19 08:48:15.827636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.829292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.829323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:54.388 [2024-11-19 08:48:15.829333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.618 ms 00:26:54.388 [2024-11-19 08:48:15.829340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.830887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.830943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:54.388 [2024-11-19 08:48:15.831011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.511 ms 00:26:54.388 [2024-11-19 08:48:15.831032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.831340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.831392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:54.388 [2024-11-19 
08:48:15.831447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.217 ms 00:26:54.388 [2024-11-19 08:48:15.831469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.875696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.388 [2024-11-19 08:48:15.875864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:54.388 [2024-11-19 08:48:15.875897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.266 ms 00:26:54.388 [2024-11-19 08:48:15.875933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.388 [2024-11-19 08:48:15.882191] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:54.389 [2024-11-19 08:48:15.883285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.389 [2024-11-19 08:48:15.883346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:54.389 [2024-11-19 08:48:15.883379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.276 ms 00:26:54.389 [2024-11-19 08:48:15.883401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.389 [2024-11-19 08:48:15.883514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.389 [2024-11-19 08:48:15.883564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:54.389 [2024-11-19 08:48:15.883594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:54.389 [2024-11-19 08:48:15.883626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.389 [2024-11-19 08:48:15.883711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.389 [2024-11-19 08:48:15.883771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:54.389 [2024-11-19 08:48:15.883800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:54.389 [2024-11-19 08:48:15.883833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.389 [2024-11-19 08:48:15.883884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.389 [2024-11-19 08:48:15.883914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:54.389 [2024-11-19 08:48:15.883942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:54.389 [2024-11-19 08:48:15.883980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.389 [2024-11-19 08:48:15.884045] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:54.389 [2024-11-19 08:48:15.884079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.389 [2024-11-19 08:48:15.884107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:54.389 [2024-11-19 08:48:15.884138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:54.389 [2024-11-19 08:48:15.884167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.389 [2024-11-19 08:48:15.888879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.389 [2024-11-19 08:48:15.888960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:54.389 [2024-11-19 08:48:15.888994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.672 ms 00:26:54.389 [2024-11-19 08:48:15.889033] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:54.389 [2024-11-19 08:48:15.889155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.389 [2024-11-19 08:48:15.889196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:54.389 [2024-11-19 08:48:15.889225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:26:54.389 [2024-11-19 08:48:15.889247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.389 [2024-11-19 08:48:15.890813] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4046.221 ms, result 0 00:26:54.389 [2024-11-19 08:48:15.904156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.389 [2024-11-19 08:48:15.920096] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:54.389 [2024-11-19 08:48:15.928246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:54.389 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.389 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:54.389 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:54.389 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:54.389 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:54.389 [2024-11-19 08:48:16.171859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.389 [2024-11-19 08:48:16.171999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:54.389 [2024-11-19 08:48:16.172033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:54.389 [2024-11-19 08:48:16.172054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.389 [2024-11-19 08:48:16.172095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.389 [2024-11-19 08:48:16.172118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:54.389 [2024-11-19 08:48:16.172154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:54.389 [2024-11-19 08:48:16.172178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.389 [2024-11-19 08:48:16.172209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.389 [2024-11-19 08:48:16.172272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:54.389 [2024-11-19 08:48:16.172294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:54.389 [2024-11-19 08:48:16.172329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.389 [2024-11-19 08:48:16.172424] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.565 ms, result 0 00:26:54.389 true 00:26:54.389 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:54.648 { 00:26:54.648 "name": "ftl", 00:26:54.648 "properties": [ 00:26:54.648 { 00:26:54.648 "name": "superblock_version", 00:26:54.648 "value": 5, 00:26:54.648 "read-only": true 00:26:54.648 }, 00:26:54.648 { 
00:26:54.648 "name": "base_device", 00:26:54.648 "bands": [ 00:26:54.648 { 00:26:54.648 "id": 0, 00:26:54.648 "state": "CLOSED", 00:26:54.648 "validity": 1.0 00:26:54.648 }, 00:26:54.648 { 00:26:54.648 "id": 1, 00:26:54.648 "state": "CLOSED", 00:26:54.648 "validity": 1.0 00:26:54.648 }, 00:26:54.648 { 00:26:54.648 "id": 2, 00:26:54.648 "state": "CLOSED", 00:26:54.648 "validity": 0.007843137254901933 00:26:54.648 }, 00:26:54.648 { 00:26:54.648 "id": 3, 00:26:54.648 "state": "FREE", 00:26:54.648 "validity": 0.0 00:26:54.648 }, 00:26:54.648 { 00:26:54.648 "id": 4, 00:26:54.648 "state": "FREE", 00:26:54.648 "validity": 0.0 00:26:54.648 }, 00:26:54.648 { 00:26:54.648 "id": 5, 00:26:54.648 "state": "FREE", 00:26:54.648 "validity": 0.0 00:26:54.648 }, 00:26:54.648 { 00:26:54.648 "id": 6, 00:26:54.648 "state": "FREE", 00:26:54.648 "validity": 0.0 00:26:54.648 }, 00:26:54.648 { 00:26:54.648 "id": 7, 00:26:54.649 "state": "FREE", 00:26:54.649 "validity": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 8, 00:26:54.649 "state": "FREE", 00:26:54.649 "validity": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 9, 00:26:54.649 "state": "FREE", 00:26:54.649 "validity": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 10, 00:26:54.649 "state": "FREE", 00:26:54.649 "validity": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 11, 00:26:54.649 "state": "FREE", 00:26:54.649 "validity": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 12, 00:26:54.649 "state": "FREE", 00:26:54.649 "validity": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 13, 00:26:54.649 "state": "FREE", 00:26:54.649 "validity": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 14, 00:26:54.649 "state": "FREE", 00:26:54.649 "validity": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 15, 00:26:54.649 "state": "FREE", 00:26:54.649 "validity": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 16, 00:26:54.649 "state": "FREE", 00:26:54.649 "validity": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 17, 00:26:54.649 "state": "FREE", 00:26:54.649 "validity": 0.0 00:26:54.649 } 00:26:54.649 ], 00:26:54.649 "read-only": true 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "name": "cache_device", 00:26:54.649 "type": "bdev", 00:26:54.649 "chunks": [ 00:26:54.649 { 00:26:54.649 "id": 0, 00:26:54.649 "state": "INACTIVE", 00:26:54.649 "utilization": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 1, 00:26:54.649 "state": "OPEN", 00:26:54.649 "utilization": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 2, 00:26:54.649 "state": "OPEN", 00:26:54.649 "utilization": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 3, 00:26:54.649 "state": "FREE", 00:26:54.649 "utilization": 0.0 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "id": 4, 00:26:54.649 "state": "FREE", 00:26:54.649 "utilization": 0.0 00:26:54.649 } 00:26:54.649 ], 00:26:54.649 "read-only": true 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "name": "verbose_mode", 00:26:54.649 "value": true, 00:26:54.649 "unit": "", 00:26:54.649 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:54.649 }, 00:26:54.649 { 00:26:54.649 "name": "prep_upgrade_on_shutdown", 00:26:54.649 "value": false, 00:26:54.649 "unit": "", 00:26:54.649 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:54.649 } 00:26:54.649 ] 00:26:54.649 } 00:26:54.649 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | 
.chunks[] | select(.utilization != 0.0)] | length' 00:26:54.649 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:54.649 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:54.908 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:54.908 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:54.908 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:54.908 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:54.908 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:55.167 Validate MD5 checksum, iteration 1 00:26:55.167 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:55.167 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:55.167 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:55.167 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:55.167 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:55.167 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:55.167 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:55.167 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:55.167 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:55.167 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:55.167 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:55.167 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:55.167 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:55.167 [2024-11-19 08:48:16.940663] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:26:55.167 [2024-11-19 08:48:16.940881] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91701 ] 00:26:55.426 [2024-11-19 08:48:17.097114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.426 [2024-11-19 08:48:17.121048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.805  [2024-11-19T08:48:19.282Z] Copying: 632/1024 [MB] (632 MBps) [2024-11-19T08:48:20.660Z] Copying: 1024/1024 [MB] (average 634 MBps) 00:26:58.753 00:26:58.753 08:48:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:58.753 08:48:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:00.134 08:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:00.134 Validate MD5 checksum, iteration 2 00:27:00.134 08:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3a8111f2af1b274cf5d4eff1976ef056 00:27:00.134 08:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3a8111f2af1b274cf5d4eff1976ef056 != \3\a\8\1\1\1\f\2\a\f\1\b\2\7\4\c\f\5\d\4\e\f\f\1\9\7\6\e\f\0\5\6 ]] 00:27:00.134 08:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:00.134 08:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:00.134 08:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:00.134 08:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:00.134 08:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:00.134 08:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:00.134 08:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:00.134 08:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:00.134 08:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:00.393 [2024-11-19 08:48:22.098603] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:27:00.393 [2024-11-19 08:48:22.098714] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91761 ] 00:27:00.393 [2024-11-19 08:48:22.254833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.393 [2024-11-19 08:48:22.278713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.824  [2024-11-19T08:48:24.300Z] Copying: 626/1024 [MB] (626 MBps) [2024-11-19T08:48:24.867Z] Copying: 1024/1024 [MB] (average 618 MBps) 00:27:02.960 00:27:02.960 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:02.960 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=434c29ff39e341281ad3b57474591588 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 434c29ff39e341281ad3b57474591588 != \4\3\4\c\2\9\f\f\3\9\e\3\4\1\2\8\1\a\d\3\b\5\7\4\7\4\5\9\1\5\8\8 ]] 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 91630 ]] 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 91630 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91812 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91812 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91812 ']' 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.867 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:04.867 [2024-11-19 08:48:26.513796] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:27:04.867 [2024-11-19 08:48:26.513990] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91812 ] 00:27:04.867 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 91630 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:04.867 [2024-11-19 08:48:26.670376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.867 [2024-11-19 08:48:26.708632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.438 [2024-11-19 08:48:27.141156] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:05.438 [2024-11-19 08:48:27.141228] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:05.438 [2024-11-19 08:48:27.285845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.438 [2024-11-19 08:48:27.285890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:05.438 [2024-11-19 08:48:27.285920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:05.438 [2024-11-19 08:48:27.285939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.438 [2024-11-19 08:48:27.285996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.438 [2024-11-19 08:48:27.286005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:05.438 [2024-11-19 08:48:27.286014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:05.438 [2024-11-19 08:48:27.286024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.438 [2024-11-19 08:48:27.286043] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:05.438 [2024-11-19 08:48:27.286241] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:05.438 [2024-11-19 08:48:27.286259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.438 [2024-11-19 08:48:27.286267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:05.438 [2024-11-19 08:48:27.286275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.221 ms 00:27:05.438 [2024-11-19 08:48:27.286282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.438 [2024-11-19 08:48:27.286555] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:05.438 [2024-11-19 08:48:27.292822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.438 [2024-11-19 08:48:27.292875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:05.438 [2024-11-19 08:48:27.292893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.280 ms 00:27:05.438 [2024-11-19 08:48:27.292901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.438 [2024-11-19 08:48:27.294255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:27:05.438 [2024-11-19 08:48:27.294362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:05.438 [2024-11-19 08:48:27.294377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:05.438 [2024-11-19 08:48:27.294388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.438 [2024-11-19 08:48:27.294704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.438 [2024-11-19 08:48:27.294737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:05.438 [2024-11-19 08:48:27.294753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:27:05.438 [2024-11-19 08:48:27.294761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.438 [2024-11-19 08:48:27.294812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.438 [2024-11-19 08:48:27.294823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:05.439 [2024-11-19 08:48:27.294831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:05.439 [2024-11-19 08:48:27.294838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.439 [2024-11-19 08:48:27.294865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.439 [2024-11-19 08:48:27.294876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:05.439 [2024-11-19 08:48:27.294887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:05.439 [2024-11-19 08:48:27.294894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.439 [2024-11-19 08:48:27.294914] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:05.439 [2024-11-19 08:48:27.295685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.439 [2024-11-19 08:48:27.295705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:05.439 [2024-11-19 08:48:27.295713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.778 ms 00:27:05.439 [2024-11-19 08:48:27.295719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.439 [2024-11-19 08:48:27.295775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.439 [2024-11-19 08:48:27.295804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:05.439 [2024-11-19 08:48:27.295812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:05.439 [2024-11-19 08:48:27.295826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.439 [2024-11-19 08:48:27.295858] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:05.439 [2024-11-19 08:48:27.295878] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:05.439 [2024-11-19 08:48:27.295915] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:05.439 [2024-11-19 08:48:27.295936] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:05.439 [2024-11-19 08:48:27.296025] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:05.439 [2024-11-19 08:48:27.296036] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:05.439 [2024-11-19 08:48:27.296045] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:05.439 [2024-11-19 08:48:27.296063] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:05.439 [2024-11-19 08:48:27.296071] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:05.439 [2024-11-19 08:48:27.296080] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:05.439 [2024-11-19 08:48:27.296087] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:05.439 [2024-11-19 08:48:27.296094] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:05.439 [2024-11-19 08:48:27.296100] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:05.439 [2024-11-19 08:48:27.296108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.439 [2024-11-19 08:48:27.296122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:05.439 [2024-11-19 08:48:27.296134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.253 ms 00:27:05.439 [2024-11-19 08:48:27.296141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.439 [2024-11-19 08:48:27.296210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.439 [2024-11-19 08:48:27.296227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:05.439 [2024-11-19 08:48:27.296240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:05.439 [2024-11-19 08:48:27.296254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.439 [2024-11-19 08:48:27.296354] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:05.439 [2024-11-19 08:48:27.296364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:05.439 [2024-11-19 08:48:27.296375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:05.439 [2024-11-19 08:48:27.296386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.439 [2024-11-19 08:48:27.296402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:05.439 [2024-11-19 08:48:27.296409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:05.439 [2024-11-19 08:48:27.296417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:05.439 [2024-11-19 08:48:27.296424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:05.439 [2024-11-19 08:48:27.296431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:05.439 [2024-11-19 08:48:27.296438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.439 [2024-11-19 08:48:27.296445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:05.439 [2024-11-19 08:48:27.296451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:05.439 [2024-11-19 08:48:27.296458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.439 [2024-11-19 08:48:27.296465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:05.439 [2024-11-19 08:48:27.296480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:27:05.439 [2024-11-19 08:48:27.296486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.439 [2024-11-19 08:48:27.296493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:05.439 [2024-11-19 08:48:27.296499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:05.439 [2024-11-19 08:48:27.296505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.439 [2024-11-19 08:48:27.296512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:05.439 [2024-11-19 08:48:27.296519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:05.439 [2024-11-19 08:48:27.296525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:05.439 [2024-11-19 08:48:27.296532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:05.439 [2024-11-19 08:48:27.296538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:05.439 [2024-11-19 08:48:27.296545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:05.439 [2024-11-19 08:48:27.296551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:05.439 [2024-11-19 08:48:27.296557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:05.439 [2024-11-19 08:48:27.296563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:05.439 [2024-11-19 08:48:27.296570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:05.439 [2024-11-19 08:48:27.296577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:05.439 [2024-11-19 08:48:27.296586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:05.439 [2024-11-19 08:48:27.296592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:05.439 [2024-11-19 08:48:27.296598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:05.439 [2024-11-19 08:48:27.296604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.439 [2024-11-19 08:48:27.296618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:05.439 [2024-11-19 08:48:27.296625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:05.439 [2024-11-19 08:48:27.296632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.439 [2024-11-19 08:48:27.296639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:05.439 [2024-11-19 08:48:27.296646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:05.439 [2024-11-19 08:48:27.296653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.439 [2024-11-19 08:48:27.296659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:05.439 [2024-11-19 08:48:27.296666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:05.439 [2024-11-19 08:48:27.296672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.439 [2024-11-19 08:48:27.296678] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:05.439 [2024-11-19 08:48:27.296686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:05.439 [2024-11-19 08:48:27.296693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:05.439 [2024-11-19 08:48:27.296702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:27:05.439 [2024-11-19 08:48:27.296710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:05.439 [2024-11-19 08:48:27.296734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:05.439 [2024-11-19 08:48:27.296741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:05.439 [2024-11-19 08:48:27.296748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:05.439 [2024-11-19 08:48:27.296754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:05.439 [2024-11-19 08:48:27.296761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:05.439 [2024-11-19 08:48:27.296770] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:05.439 [2024-11-19 08:48:27.296780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.439 [2024-11-19 08:48:27.296797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:05.439 [2024-11-19 08:48:27.296805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:05.439 [2024-11-19 08:48:27.296812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:05.439 [2024-11-19 08:48:27.296819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:05.439 [2024-11-19 08:48:27.296826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:05.439 [2024-11-19 08:48:27.296833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:05.439 [2024-11-19 08:48:27.296840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:05.439 [2024-11-19 08:48:27.296851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:05.439 [2024-11-19 08:48:27.296859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:05.439 [2024-11-19 08:48:27.296866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:05.440 [2024-11-19 08:48:27.296877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:05.440 [2024-11-19 08:48:27.296885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:05.440 [2024-11-19 08:48:27.296893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:05.440 [2024-11-19 08:48:27.296900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:05.440 [2024-11-19 08:48:27.296908] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:27:05.440 [2024-11-19 08:48:27.296920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.440 [2024-11-19 08:48:27.296937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:05.440 [2024-11-19 08:48:27.296945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:05.440 [2024-11-19 08:48:27.296953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:05.440 [2024-11-19 08:48:27.296970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:05.440 [2024-11-19 08:48:27.296979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.440 [2024-11-19 08:48:27.296990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:05.440 [2024-11-19 08:48:27.296998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.679 ms 00:27:05.440 [2024-11-19 08:48:27.297008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.440 [2024-11-19 08:48:27.314119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.440 [2024-11-19 08:48:27.314207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:05.440 [2024-11-19 08:48:27.314257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.088 ms 00:27:05.440 [2024-11-19 08:48:27.314278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.440 [2024-11-19 08:48:27.314380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.440 [2024-11-19 08:48:27.314413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:05.440 [2024-11-19 08:48:27.314436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:05.440 [2024-11-19 08:48:27.314471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.440 [2024-11-19 08:48:27.332805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.440 [2024-11-19 08:48:27.332921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:05.440 [2024-11-19 08:48:27.332962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.281 ms 00:27:05.440 [2024-11-19 08:48:27.332983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.440 [2024-11-19 08:48:27.333053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.440 [2024-11-19 08:48:27.333093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:05.440 [2024-11-19 08:48:27.333146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:05.440 [2024-11-19 08:48:27.333173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.440 [2024-11-19 08:48:27.333314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.440 [2024-11-19 08:48:27.333381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:05.440 [2024-11-19 08:48:27.333414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:27:05.440 [2024-11-19 08:48:27.333435] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:05.440 [2024-11-19 08:48:27.333518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.440 [2024-11-19 08:48:27.333563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:05.440 [2024-11-19 08:48:27.333593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:05.440 [2024-11-19 08:48:27.333625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.700 [2024-11-19 08:48:27.346167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.700 [2024-11-19 08:48:27.346263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:05.700 [2024-11-19 08:48:27.346311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.526 ms 00:27:05.700 [2024-11-19 08:48:27.346333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.700 [2024-11-19 08:48:27.346514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.700 [2024-11-19 08:48:27.346593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:05.700 [2024-11-19 08:48:27.346642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:05.700 [2024-11-19 08:48:27.346671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.700 [2024-11-19 08:48:27.361068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.700 [2024-11-19 08:48:27.361141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:05.700 [2024-11-19 08:48:27.361174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.326 ms 00:27:05.700 [2024-11-19 08:48:27.361212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.700 [2024-11-19 08:48:27.362401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.700 [2024-11-19 08:48:27.362476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:05.700 [2024-11-19 08:48:27.362543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.275 ms 00:27:05.700 [2024-11-19 08:48:27.362571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.700 [2024-11-19 08:48:27.392477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.700 [2024-11-19 08:48:27.392608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:05.700 [2024-11-19 08:48:27.392647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.906 ms 00:27:05.700 [2024-11-19 08:48:27.392670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.700 [2024-11-19 08:48:27.392920] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:05.700 [2024-11-19 08:48:27.393137] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:05.700 [2024-11-19 08:48:27.393307] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:05.700 [2024-11-19 08:48:27.393466] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:05.700 [2024-11-19 08:48:27.393505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.700 [2024-11-19 08:48:27.393539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:05.700 [2024-11-19 
08:48:27.393591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.745 ms 00:27:05.700 [2024-11-19 08:48:27.393617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.700 [2024-11-19 08:48:27.393712] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:05.700 [2024-11-19 08:48:27.393771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.700 [2024-11-19 08:48:27.393797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:05.701 [2024-11-19 08:48:27.393824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:27:05.701 [2024-11-19 08:48:27.393857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.701 [2024-11-19 08:48:27.397929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.701 [2024-11-19 08:48:27.398003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:05.701 [2024-11-19 08:48:27.398043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.030 ms 00:27:05.701 [2024-11-19 08:48:27.398080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.701 [2024-11-19 08:48:27.398875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.701 [2024-11-19 08:48:27.398935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:05.701 [2024-11-19 08:48:27.398965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:05.701 [2024-11-19 08:48:27.398987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.701 [2024-11-19 08:48:27.399098] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:05.701 [2024-11-19 08:48:27.399442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.701 [2024-11-19 08:48:27.399452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:05.701 [2024-11-19 08:48:27.399467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.347 ms 00:27:05.701 [2024-11-19 08:48:27.399475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.270 [2024-11-19 08:48:27.992910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.270 [2024-11-19 08:48:27.993079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:06.270 [2024-11-19 08:48:27.993115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 594.244 ms 00:27:06.270 [2024-11-19 08:48:27.993125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.270 [2024-11-19 08:48:27.994962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.270 [2024-11-19 08:48:27.995003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:06.270 [2024-11-19 08:48:27.995032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.154 ms 00:27:06.270 [2024-11-19 08:48:27.995041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.270 [2024-11-19 08:48:27.995526] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:27:06.270 [2024-11-19 08:48:27.995549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.270 [2024-11-19 08:48:27.995557] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:06.270 [2024-11-19 08:48:27.995566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.478 ms 00:27:06.270 [2024-11-19 08:48:27.995573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.270 [2024-11-19 08:48:27.995620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.270 [2024-11-19 08:48:27.995636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:06.270 [2024-11-19 08:48:27.995644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:06.270 [2024-11-19 08:48:27.995652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.270 [2024-11-19 08:48:27.995698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 597.741 ms, result 0 00:27:06.270 [2024-11-19 08:48:27.995766] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:06.270 [2024-11-19 08:48:27.995885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.270 [2024-11-19 08:48:27.995896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:06.270 [2024-11-19 08:48:27.995905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.122 ms 00:27:06.270 [2024-11-19 08:48:27.995912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.840 [2024-11-19 08:48:28.588415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.840 [2024-11-19 08:48:28.588472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:06.840 [2024-11-19 08:48:28.588487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 593.305 ms 00:27:06.840 [2024-11-19 08:48:28.588494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.840 [2024-11-19 08:48:28.590271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.840 [2024-11-19 08:48:28.590366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:06.840 [2024-11-19 08:48:28.590381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.105 ms 00:27:06.840 [2024-11-19 08:48:28.590389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.840 [2024-11-19 08:48:28.590839] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:06.840 [2024-11-19 08:48:28.590858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.840 [2024-11-19 08:48:28.590866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:06.840 [2024-11-19 08:48:28.590875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.430 ms 00:27:06.840 [2024-11-19 08:48:28.590882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.840 [2024-11-19 08:48:28.590911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.840 [2024-11-19 08:48:28.590920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:06.840 [2024-11-19 08:48:28.590928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:06.840 [2024-11-19 08:48:28.590934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.840 [2024-11-19 
08:48:28.590969] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 596.376 ms, result 0 00:27:06.840 [2024-11-19 08:48:28.591019] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:06.840 [2024-11-19 08:48:28.591030] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:06.840 [2024-11-19 08:48:28.591039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.840 [2024-11-19 08:48:28.591049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:06.840 [2024-11-19 08:48:28.591058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1194.267 ms 00:27:06.840 [2024-11-19 08:48:28.591083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.840 [2024-11-19 08:48:28.591113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.840 [2024-11-19 08:48:28.591122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:06.840 [2024-11-19 08:48:28.591131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:06.840 [2024-11-19 08:48:28.591139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.840 [2024-11-19 08:48:28.598584] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:06.840 [2024-11-19 08:48:28.598686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.840 [2024-11-19 08:48:28.598700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:06.840 [2024-11-19 08:48:28.598708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.547 ms 00:27:06.841 [2024-11-19 08:48:28.598715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.841 [2024-11-19 08:48:28.599282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.841 [2024-11-19 08:48:28.599302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:06.841 [2024-11-19 08:48:28.599324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.495 ms 00:27:06.841 [2024-11-19 08:48:28.599332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.841 [2024-11-19 08:48:28.601236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.841 [2024-11-19 08:48:28.601318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:06.841 [2024-11-19 08:48:28.601331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.890 ms 00:27:06.841 [2024-11-19 08:48:28.601339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.841 [2024-11-19 08:48:28.601417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.841 [2024-11-19 08:48:28.601425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:06.841 [2024-11-19 08:48:28.601434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:06.841 [2024-11-19 08:48:28.601441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.841 [2024-11-19 08:48:28.601540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.841 [2024-11-19 08:48:28.601549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:06.841 
[2024-11-19 08:48:28.601561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:06.841 [2024-11-19 08:48:28.601567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.841 [2024-11-19 08:48:28.601587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.841 [2024-11-19 08:48:28.601595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:06.841 [2024-11-19 08:48:28.601603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:06.841 [2024-11-19 08:48:28.601614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.841 [2024-11-19 08:48:28.601644] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:06.841 [2024-11-19 08:48:28.601654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.841 [2024-11-19 08:48:28.601669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:06.841 [2024-11-19 08:48:28.601677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:06.841 [2024-11-19 08:48:28.601687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.841 [2024-11-19 08:48:28.601765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.841 [2024-11-19 08:48:28.601774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:06.841 [2024-11-19 08:48:28.601781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:27:06.841 [2024-11-19 08:48:28.601789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.841 [2024-11-19 08:48:28.603019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1319.172 ms, result 0 00:27:06.841 [2024-11-19 08:48:28.615356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.841 [2024-11-19 08:48:28.631322] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:06.841 [2024-11-19 08:48:28.639433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:07.411 Validate MD5 checksum, iteration 1 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:07.411 08:48:29 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:07.411 08:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:07.411 [2024-11-19 08:48:29.126536] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:27:07.411 [2024-11-19 08:48:29.126655] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91850 ] 00:27:07.411 [2024-11-19 08:48:29.279068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.411 [2024-11-19 08:48:29.302974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.791  [2024-11-19T08:48:31.268Z] Copying: 628/1024 [MB] (628 MBps) [2024-11-19T08:48:32.207Z] Copying: 1024/1024 [MB] (average 625 MBps) 00:27:10.300 00:27:10.300 08:48:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:10.300 08:48:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:11.680 08:48:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:11.680 Validate MD5 checksum, iteration 2 00:27:11.680 08:48:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3a8111f2af1b274cf5d4eff1976ef056 00:27:11.680 08:48:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3a8111f2af1b274cf5d4eff1976ef056 != \3\a\8\1\1\1\f\2\a\f\1\b\2\7\4\c\f\5\d\4\e\f\f\1\9\7\6\e\f\0\5\6 ]] 00:27:11.680 08:48:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:11.680 08:48:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:11.680 08:48:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:11.680 08:48:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:11.680 08:48:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:11.680 08:48:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:11.680 08:48:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:11.680 08:48:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:11.680 08:48:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:11.940 [2024-11-19 08:48:33.663492] Starting SPDK v25.01-pre git sha1 
d47eb51c9 / DPDK 22.11.4 initialization... 00:27:11.940 [2024-11-19 08:48:33.663683] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91902 ] 00:27:11.940 [2024-11-19 08:48:33.821394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.199 [2024-11-19 08:48:33.845999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.581  [2024-11-19T08:48:36.057Z] Copying: 636/1024 [MB] (636 MBps) [2024-11-19T08:48:38.594Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:27:16.688 00:27:16.947 08:48:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:16.947 08:48:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:18.859 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=434c29ff39e341281ad3b57474591588 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 434c29ff39e341281ad3b57474591588 != \4\3\4\c\2\9\f\f\3\9\e\3\4\1\2\8\1\a\d\3\b\5\7\4\7\4\5\9\1\5\8\8 ]] 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91812 ]] 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91812 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91812 ']' 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91812 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91812 00:27:18.860 killing process with pid 91812 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91812' 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 91812 00:27:18.860 08:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91812 00:27:18.860 [2024-11-19 08:48:40.671667] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:18.860 [2024-11-19 08:48:40.677243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.860 [2024-11-19 08:48:40.677286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:18.860 [2024-11-19 08:48:40.677301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:18.860 [2024-11-19 08:48:40.677317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.860 [2024-11-19 08:48:40.677344] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:18.860 [2024-11-19 08:48:40.678608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.860 [2024-11-19 08:48:40.678630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:18.860 [2024-11-19 08:48:40.678642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.252 ms 00:27:18.860 [2024-11-19 08:48:40.678651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.860 [2024-11-19 08:48:40.678881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.860 [2024-11-19 08:48:40.678892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:18.860 [2024-11-19 08:48:40.678900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 00:27:18.860 [2024-11-19 08:48:40.678908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.860 [2024-11-19 08:48:40.681334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.860 [2024-11-19 08:48:40.681484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:18.860 [2024-11-19 08:48:40.681533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.398 ms 00:27:18.860 [2024-11-19 08:48:40.681588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.860 [2024-11-19 08:48:40.685830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.860 [2024-11-19 08:48:40.685914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:18.860 [2024-11-19 08:48:40.685949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.118 ms 00:27:18.860 [2024-11-19 08:48:40.685978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.860 [2024-11-19 08:48:40.689567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.860 [2024-11-19 08:48:40.689680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:18.860 [2024-11-19 08:48:40.689760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.350 ms 00:27:18.860 [2024-11-19 08:48:40.689812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.860 [2024-11-19 08:48:40.692233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.860 [2024-11-19 08:48:40.692483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:18.860 [2024-11-19 08:48:40.692538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.306 ms 00:27:18.860 [2024-11-19 08:48:40.692570] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:18.860 [2024-11-19 08:48:40.692899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.860 [2024-11-19 08:48:40.692972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:18.860 [2024-11-19 08:48:40.693005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.250 ms 00:27:18.860 [2024-11-19 08:48:40.693035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.860 [2024-11-19 08:48:40.695530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.860 [2024-11-19 08:48:40.695766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:18.860 [2024-11-19 08:48:40.695819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.420 ms 00:27:18.860 [2024-11-19 08:48:40.695869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.860 [2024-11-19 08:48:40.697894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.860 [2024-11-19 08:48:40.697954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:18.860 [2024-11-19 08:48:40.697977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.956 ms 00:27:18.860 [2024-11-19 08:48:40.697994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.860 [2024-11-19 08:48:40.700003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.860 [2024-11-19 08:48:40.700064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:18.860 [2024-11-19 08:48:40.700086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.952 ms 00:27:18.860 [2024-11-19 08:48:40.700103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.860 [2024-11-19 08:48:40.701984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.860 [2024-11-19 08:48:40.702041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:18.860 [2024-11-19 08:48:40.702063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.757 ms 00:27:18.860 [2024-11-19 08:48:40.702080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.860 [2024-11-19 08:48:40.702139] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:18.860 [2024-11-19 08:48:40.702174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:18.860 [2024-11-19 08:48:40.702198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:18.860 [2024-11-19 08:48:40.702219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:18.860 [2024-11-19 08:48:40.702239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 
[2024-11-19 08:48:40.702335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:18.860 [2024-11-19 08:48:40.702530] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:18.860 [2024-11-19 08:48:40.702549] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 779195a2-14ef-4d40-b764-59632bb4fea9 00:27:18.861 [2024-11-19 08:48:40.702568] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:18.861 [2024-11-19 08:48:40.702587] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:18.861 [2024-11-19 08:48:40.702605] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:18.861 [2024-11-19 08:48:40.702623] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:18.861 [2024-11-19 08:48:40.702642] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:18.861 [2024-11-19 08:48:40.702662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:18.861 [2024-11-19 08:48:40.702681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:18.861 [2024-11-19 08:48:40.702697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:18.861 [2024-11-19 08:48:40.702743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:18.861 [2024-11-19 08:48:40.702768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.861 [2024-11-19 08:48:40.702797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:18.861 [2024-11-19 08:48:40.702817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.631 ms 00:27:18.861 [2024-11-19 08:48:40.702838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.705323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.861 [2024-11-19 08:48:40.705367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:18.861 [2024-11-19 08:48:40.705389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.433 ms 00:27:18.861 [2024-11-19 08:48:40.705407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
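Note on the check that just completed: the md5sum/cut trace earlier in this run is the heart of the shutdown test — a payload is written through the FTL bdev, its checksum is recorded, the device is shut down and brought back, and the data read back must hash to the same value. A minimal stand-alone sketch of that round-trip follows; the file paths and sizes are illustrative placeholders, not the values upgrade_shutdown.sh actually uses, and the plain copy stands in for the write/shutdown/read-back steps the real test drives through spdk_dd.
#!/usr/bin/env bash
# Sketch of the checksum round-trip (assumed paths, not the test's own helper).
set -euo pipefail
payload=/tmp/ftl_payload       # placeholder scratch file
restored=/tmp/ftl_restored     # placeholder read-back destination
dd if=/dev/urandom of="$payload" bs=1M count=4 status=none
expected=$(md5sum "$payload" | cut -f1 -d' ')    # checksum recorded before shutdown
# In the real test the payload travels through the FTL bdev and the device is
# shut down and restored in between; a plain copy stands in for that here.
cp "$payload" "$restored"
actual=$(md5sum "$restored" | cut -f1 -d' ')     # checksum after read-back
[[ "$actual" == "$expected" ]] || { echo "checksum mismatch" >&2; exit 1; }
echo "checksums match: $actual"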
00:27:18.861 [2024-11-19 08:48:40.705628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.861 [2024-11-19 08:48:40.705653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:18.861 [2024-11-19 08:48:40.705673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.114 ms 00:27:18.861 [2024-11-19 08:48:40.705692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.713847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.861 [2024-11-19 08:48:40.713957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:18.861 [2024-11-19 08:48:40.713998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.861 [2024-11-19 08:48:40.714011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.714067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.861 [2024-11-19 08:48:40.714082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:18.861 [2024-11-19 08:48:40.714095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.861 [2024-11-19 08:48:40.714106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.714215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.861 [2024-11-19 08:48:40.714234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:18.861 [2024-11-19 08:48:40.714248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.861 [2024-11-19 08:48:40.714261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.714290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.861 [2024-11-19 08:48:40.714310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:18.861 [2024-11-19 08:48:40.714322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.861 [2024-11-19 08:48:40.714334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.730131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.861 [2024-11-19 08:48:40.730185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:18.861 [2024-11-19 08:48:40.730197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.861 [2024-11-19 08:48:40.730206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.738879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.861 [2024-11-19 08:48:40.738921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:18.861 [2024-11-19 08:48:40.738944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.861 [2024-11-19 08:48:40.738952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.739025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.861 [2024-11-19 08:48:40.739035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:18.861 [2024-11-19 08:48:40.739042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.861 [2024-11-19 08:48:40.739050] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.739087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.861 [2024-11-19 08:48:40.739097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:18.861 [2024-11-19 08:48:40.739108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.861 [2024-11-19 08:48:40.739116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.739186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.861 [2024-11-19 08:48:40.739197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:18.861 [2024-11-19 08:48:40.739204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.861 [2024-11-19 08:48:40.739211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.739249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.861 [2024-11-19 08:48:40.739260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:18.861 [2024-11-19 08:48:40.739271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.861 [2024-11-19 08:48:40.739286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.739330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.861 [2024-11-19 08:48:40.739339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:18.861 [2024-11-19 08:48:40.739345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.861 [2024-11-19 08:48:40.739353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.739398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:18.861 [2024-11-19 08:48:40.739409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:18.861 [2024-11-19 08:48:40.739420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:18.861 [2024-11-19 08:48:40.739427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.861 [2024-11-19 08:48:40.739554] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 62.405 ms, result 0 00:27:19.122 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:19.122 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:19.122 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:19.122 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:19.122 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:19.122 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:19.122 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:19.122 Remove shared memory files 00:27:19.122 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:19.122 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:19.122 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:19.122 08:48:40 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid91630 00:27:19.122 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:19.122 08:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:19.122 ************************************ 00:27:19.122 END TEST ftl_upgrade_shutdown 00:27:19.122 ************************************ 00:27:19.122 00:27:19.122 real 1m11.143s 00:27:19.122 user 1m30.722s 00:27:19.122 sys 0m22.824s 00:27:19.122 08:48:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:19.122 08:48:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:19.382 08:48:41 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:27:19.382 08:48:41 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:19.382 08:48:41 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:27:19.382 08:48:41 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.382 08:48:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:19.382 ************************************ 00:27:19.382 START TEST ftl_restore_fast 00:27:19.382 ************************************ 00:27:19.382 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:19.382 * Looking for test storage... 00:27:19.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:19.382 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:19.382 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:27:19.382 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.643 --rc genhtml_branch_coverage=1 00:27:19.643 --rc genhtml_function_coverage=1 00:27:19.643 --rc genhtml_legend=1 00:27:19.643 --rc geninfo_all_blocks=1 00:27:19.643 --rc geninfo_unexecuted_blocks=1 00:27:19.643 00:27:19.643 ' 00:27:19.643 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.643 --rc genhtml_branch_coverage=1 00:27:19.643 --rc genhtml_function_coverage=1 00:27:19.643 --rc genhtml_legend=1 00:27:19.643 --rc geninfo_all_blocks=1 00:27:19.643 --rc geninfo_unexecuted_blocks=1 00:27:19.643 00:27:19.643 ' 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:19.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.644 --rc genhtml_branch_coverage=1 00:27:19.644 --rc genhtml_function_coverage=1 00:27:19.644 --rc genhtml_legend=1 00:27:19.644 --rc geninfo_all_blocks=1 00:27:19.644 --rc geninfo_unexecuted_blocks=1 00:27:19.644 00:27:19.644 ' 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:19.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.644 --rc genhtml_branch_coverage=1 00:27:19.644 --rc genhtml_function_coverage=1 00:27:19.644 --rc genhtml_legend=1 00:27:19.644 --rc geninfo_all_blocks=1 00:27:19.644 --rc geninfo_unexecuted_blocks=1 00:27:19.644 00:27:19.644 ' 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
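Note on the xtrace above: this is scripts/common.sh deciding whether the installed lcov predates version 2 — both version strings are split on dots and compared field by field as decimal numbers, and the result selects the legacy --rc lcov_* coverage options. A rough stand-alone equivalent of that comparison is sketched below; it is an illustrative reimplementation, not the repo's cmp_versions helper.
# Sketch: numeric, field-by-field "is version A older than version B" check.
version_lt() {
    local -a a b
    IFS=.-: read -ra a <<< "$1"       # same separators the trace splits on
    IFS=.-: read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( 10#$x < 10#$y )) && return 0
        (( 10#$x > 10#$y )) && return 1
    done
    return 1                          # equal versions are not "less than"
}
if version_lt 1.15 2; then
    echo "lcov older than 2.x: keep the legacy --rc lcov_* options"
fi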
00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.hfbi04BWEq 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:27:19.644 08:48:41 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92058 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92058 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 92058 ']' 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:19.644 08:48:41 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:27:19.644 [2024-11-19 08:48:41.476111] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:27:19.644 [2024-11-19 08:48:41.476242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92058 ] 00:27:19.904 [2024-11-19 08:48:41.631941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.904 [2024-11-19 08:48:41.657711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.473 08:48:42 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.473 08:48:42 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:27:20.473 08:48:42 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:20.473 08:48:42 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:27:20.473 08:48:42 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:20.473 08:48:42 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:27:20.473 08:48:42 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:27:20.474 08:48:42 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:20.734 08:48:42 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:20.734 08:48:42 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:27:20.734 08:48:42 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:20.734 08:48:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:20.734 08:48:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:20.734 08:48:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:20.734 08:48:42 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:27:20.734 08:48:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:20.993 08:48:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:20.993 { 00:27:20.993 "name": "nvme0n1", 00:27:20.993 "aliases": [ 00:27:20.993 "bb9cfe3e-2032-4b42-91ad-28b62771ba31" 00:27:20.993 ], 00:27:20.993 "product_name": "NVMe disk", 00:27:20.993 "block_size": 4096, 00:27:20.993 "num_blocks": 1310720, 00:27:20.993 "uuid": "bb9cfe3e-2032-4b42-91ad-28b62771ba31", 00:27:20.993 "numa_id": -1, 00:27:20.993 "assigned_rate_limits": { 00:27:20.993 "rw_ios_per_sec": 0, 00:27:20.993 "rw_mbytes_per_sec": 0, 00:27:20.993 "r_mbytes_per_sec": 0, 00:27:20.993 "w_mbytes_per_sec": 0 00:27:20.993 }, 00:27:20.993 "claimed": true, 00:27:20.993 "claim_type": "read_many_write_one", 00:27:20.993 "zoned": false, 00:27:20.993 "supported_io_types": { 00:27:20.993 "read": true, 00:27:20.993 "write": true, 00:27:20.993 "unmap": true, 00:27:20.993 "flush": true, 00:27:20.993 "reset": true, 00:27:20.993 "nvme_admin": true, 00:27:20.993 "nvme_io": true, 00:27:20.993 "nvme_io_md": false, 00:27:20.993 "write_zeroes": true, 00:27:20.993 "zcopy": false, 00:27:20.993 "get_zone_info": false, 00:27:20.993 "zone_management": false, 00:27:20.993 "zone_append": false, 00:27:20.993 "compare": true, 00:27:20.993 "compare_and_write": false, 00:27:20.993 "abort": true, 00:27:20.993 "seek_hole": false, 00:27:20.993 "seek_data": false, 00:27:20.993 "copy": true, 00:27:20.993 "nvme_iov_md": false 00:27:20.993 }, 00:27:20.993 "driver_specific": { 00:27:20.993 "nvme": [ 00:27:20.993 { 00:27:20.993 "pci_address": "0000:00:11.0", 00:27:20.993 "trid": { 00:27:20.993 "trtype": "PCIe", 00:27:20.993 "traddr": "0000:00:11.0" 00:27:20.993 }, 00:27:20.993 "ctrlr_data": { 00:27:20.993 "cntlid": 0, 00:27:20.993 "vendor_id": "0x1b36", 00:27:20.993 "model_number": "QEMU NVMe Ctrl", 00:27:20.993 "serial_number": "12341", 00:27:20.993 "firmware_revision": "8.0.0", 00:27:20.993 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:20.993 "oacs": { 00:27:20.993 "security": 0, 00:27:20.993 "format": 1, 00:27:20.993 "firmware": 0, 00:27:20.993 "ns_manage": 1 00:27:20.993 }, 00:27:20.993 "multi_ctrlr": false, 00:27:20.993 "ana_reporting": false 00:27:20.993 }, 00:27:20.993 "vs": { 00:27:20.993 "nvme_version": "1.4" 00:27:20.993 }, 00:27:20.993 "ns_data": { 00:27:20.993 "id": 1, 00:27:20.993 "can_share": false 00:27:20.993 } 00:27:20.993 } 00:27:20.993 ], 00:27:20.993 "mp_policy": "active_passive" 00:27:20.993 } 00:27:20.993 } 00:27:20.993 ]' 00:27:20.993 08:48:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:20.993 08:48:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:20.993 08:48:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:20.993 08:48:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:20.993 08:48:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:20.993 08:48:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:27:20.993 08:48:42 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:27:20.993 08:48:42 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:20.993 08:48:42 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:27:20.993 08:48:42 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:20.993 08:48:42 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:21.264 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=21d6dd6b-c3ac-4682-9dc8-624a952c8ee9 00:27:21.264 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:27:21.264 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21d6dd6b-c3ac-4682-9dc8-624a952c8ee9 00:27:21.553 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=4f03fc7a-4d98-4e95-ac8a-a29f09dca454 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4f03fc7a-4d98-4e95-ac8a-a29f09dca454 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:27:21.813 08:48:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 00:27:22.074 08:48:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:22.074 { 00:27:22.074 "name": "9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6", 00:27:22.074 "aliases": [ 00:27:22.074 "lvs/nvme0n1p0" 00:27:22.074 ], 00:27:22.074 "product_name": "Logical Volume", 00:27:22.074 "block_size": 4096, 00:27:22.074 "num_blocks": 26476544, 00:27:22.074 "uuid": "9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6", 00:27:22.074 "assigned_rate_limits": { 00:27:22.074 "rw_ios_per_sec": 0, 00:27:22.074 "rw_mbytes_per_sec": 0, 00:27:22.074 "r_mbytes_per_sec": 0, 00:27:22.074 "w_mbytes_per_sec": 0 00:27:22.074 }, 00:27:22.074 "claimed": false, 00:27:22.074 "zoned": false, 00:27:22.074 "supported_io_types": { 00:27:22.074 "read": true, 00:27:22.074 "write": true, 00:27:22.074 "unmap": true, 00:27:22.074 "flush": false, 00:27:22.074 "reset": true, 00:27:22.074 "nvme_admin": false, 00:27:22.074 "nvme_io": false, 00:27:22.074 "nvme_io_md": false, 00:27:22.074 "write_zeroes": true, 00:27:22.074 "zcopy": false, 00:27:22.074 "get_zone_info": false, 00:27:22.074 "zone_management": false, 00:27:22.074 
"zone_append": false, 00:27:22.074 "compare": false, 00:27:22.074 "compare_and_write": false, 00:27:22.074 "abort": false, 00:27:22.074 "seek_hole": true, 00:27:22.074 "seek_data": true, 00:27:22.074 "copy": false, 00:27:22.074 "nvme_iov_md": false 00:27:22.074 }, 00:27:22.074 "driver_specific": { 00:27:22.074 "lvol": { 00:27:22.074 "lvol_store_uuid": "4f03fc7a-4d98-4e95-ac8a-a29f09dca454", 00:27:22.074 "base_bdev": "nvme0n1", 00:27:22.074 "thin_provision": true, 00:27:22.074 "num_allocated_clusters": 0, 00:27:22.074 "snapshot": false, 00:27:22.074 "clone": false, 00:27:22.074 "esnap_clone": false 00:27:22.074 } 00:27:22.074 } 00:27:22.074 } 00:27:22.074 ]' 00:27:22.074 08:48:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:22.074 08:48:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:22.074 08:48:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:22.074 08:48:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:22.074 08:48:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:22.074 08:48:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:27:22.074 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:27:22.074 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:27:22.074 08:48:43 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:22.334 08:48:44 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:22.334 08:48:44 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:22.334 08:48:44 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 00:27:22.334 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 00:27:22.334 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:22.334 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:22.334 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:27:22.334 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 00:27:22.595 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:22.595 { 00:27:22.595 "name": "9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6", 00:27:22.595 "aliases": [ 00:27:22.595 "lvs/nvme0n1p0" 00:27:22.595 ], 00:27:22.595 "product_name": "Logical Volume", 00:27:22.595 "block_size": 4096, 00:27:22.595 "num_blocks": 26476544, 00:27:22.595 "uuid": "9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6", 00:27:22.595 "assigned_rate_limits": { 00:27:22.595 "rw_ios_per_sec": 0, 00:27:22.595 "rw_mbytes_per_sec": 0, 00:27:22.595 "r_mbytes_per_sec": 0, 00:27:22.595 "w_mbytes_per_sec": 0 00:27:22.595 }, 00:27:22.595 "claimed": false, 00:27:22.595 "zoned": false, 00:27:22.595 "supported_io_types": { 00:27:22.595 "read": true, 00:27:22.595 "write": true, 00:27:22.595 "unmap": true, 00:27:22.595 "flush": false, 00:27:22.595 "reset": true, 00:27:22.595 "nvme_admin": false, 00:27:22.595 "nvme_io": false, 00:27:22.595 "nvme_io_md": false, 00:27:22.595 "write_zeroes": true, 00:27:22.595 "zcopy": false, 00:27:22.595 "get_zone_info": false, 00:27:22.595 
"zone_management": false, 00:27:22.595 "zone_append": false, 00:27:22.595 "compare": false, 00:27:22.595 "compare_and_write": false, 00:27:22.595 "abort": false, 00:27:22.595 "seek_hole": true, 00:27:22.595 "seek_data": true, 00:27:22.595 "copy": false, 00:27:22.595 "nvme_iov_md": false 00:27:22.595 }, 00:27:22.595 "driver_specific": { 00:27:22.595 "lvol": { 00:27:22.595 "lvol_store_uuid": "4f03fc7a-4d98-4e95-ac8a-a29f09dca454", 00:27:22.595 "base_bdev": "nvme0n1", 00:27:22.595 "thin_provision": true, 00:27:22.595 "num_allocated_clusters": 0, 00:27:22.595 "snapshot": false, 00:27:22.595 "clone": false, 00:27:22.595 "esnap_clone": false 00:27:22.595 } 00:27:22.595 } 00:27:22.595 } 00:27:22.595 ]' 00:27:22.595 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:22.595 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:22.595 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:22.854 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:22.854 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:22.854 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:27:22.854 08:48:44 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:27:22.854 08:48:44 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:22.854 08:48:44 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:27:22.854 08:48:44 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 00:27:22.854 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 00:27:22.854 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:22.854 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:22.854 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:27:22.854 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 00:27:23.115 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:23.115 { 00:27:23.115 "name": "9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6", 00:27:23.115 "aliases": [ 00:27:23.115 "lvs/nvme0n1p0" 00:27:23.115 ], 00:27:23.115 "product_name": "Logical Volume", 00:27:23.115 "block_size": 4096, 00:27:23.115 "num_blocks": 26476544, 00:27:23.115 "uuid": "9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6", 00:27:23.115 "assigned_rate_limits": { 00:27:23.115 "rw_ios_per_sec": 0, 00:27:23.115 "rw_mbytes_per_sec": 0, 00:27:23.115 "r_mbytes_per_sec": 0, 00:27:23.115 "w_mbytes_per_sec": 0 00:27:23.115 }, 00:27:23.115 "claimed": false, 00:27:23.115 "zoned": false, 00:27:23.115 "supported_io_types": { 00:27:23.115 "read": true, 00:27:23.115 "write": true, 00:27:23.115 "unmap": true, 00:27:23.115 "flush": false, 00:27:23.115 "reset": true, 00:27:23.115 "nvme_admin": false, 00:27:23.115 "nvme_io": false, 00:27:23.115 "nvme_io_md": false, 00:27:23.115 "write_zeroes": true, 00:27:23.115 "zcopy": false, 00:27:23.115 "get_zone_info": false, 00:27:23.115 "zone_management": false, 00:27:23.115 "zone_append": false, 00:27:23.115 "compare": false, 00:27:23.115 "compare_and_write": false, 00:27:23.115 "abort": false, 
00:27:23.115 "seek_hole": true, 00:27:23.115 "seek_data": true, 00:27:23.115 "copy": false, 00:27:23.115 "nvme_iov_md": false 00:27:23.115 }, 00:27:23.115 "driver_specific": { 00:27:23.115 "lvol": { 00:27:23.115 "lvol_store_uuid": "4f03fc7a-4d98-4e95-ac8a-a29f09dca454", 00:27:23.115 "base_bdev": "nvme0n1", 00:27:23.115 "thin_provision": true, 00:27:23.115 "num_allocated_clusters": 0, 00:27:23.115 "snapshot": false, 00:27:23.115 "clone": false, 00:27:23.115 "esnap_clone": false 00:27:23.115 } 00:27:23.115 } 00:27:23.115 } 00:27:23.115 ]' 00:27:23.115 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:23.115 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:23.115 08:48:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:23.115 08:48:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:23.115 08:48:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:23.115 08:48:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:27:23.115 08:48:45 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:27:23.115 08:48:45 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 --l2p_dram_limit 10' 00:27:23.115 08:48:45 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:27:23.115 08:48:45 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:27:23.115 08:48:45 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:23.115 08:48:45 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:27:23.115 08:48:45 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:27:23.115 08:48:45 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9d7fecf9-4b1d-42e6-a7d2-05ff9658d5f6 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:27:23.376 [2024-11-19 08:48:45.185167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.376 [2024-11-19 08:48:45.185258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:23.376 [2024-11-19 08:48:45.185274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:23.376 [2024-11-19 08:48:45.185283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-19 08:48:45.185356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-19 08:48:45.185369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:23.377 [2024-11-19 08:48:45.185378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:23.377 [2024-11-19 08:48:45.185391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-19 08:48:45.185409] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:23.377 [2024-11-19 08:48:45.185653] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:23.377 [2024-11-19 08:48:45.185673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-19 08:48:45.185683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:23.377 [2024-11-19 08:48:45.185691] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:27:23.377 [2024-11-19 08:48:45.185700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-19 08:48:45.185728] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cab91ae8-5b37-48ec-8dc8-334c406b4da9 00:27:23.377 [2024-11-19 08:48:45.187126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-19 08:48:45.187155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:23.377 [2024-11-19 08:48:45.187170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:23.377 [2024-11-19 08:48:45.187177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-19 08:48:45.194496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-19 08:48:45.194530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:23.377 [2024-11-19 08:48:45.194541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.265 ms 00:27:23.377 [2024-11-19 08:48:45.194549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-19 08:48:45.194628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-19 08:48:45.194643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:23.377 [2024-11-19 08:48:45.194653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:23.377 [2024-11-19 08:48:45.194660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-19 08:48:45.194742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-19 08:48:45.194753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:23.377 [2024-11-19 08:48:45.194762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:23.377 [2024-11-19 08:48:45.194769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-19 08:48:45.194794] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:23.377 [2024-11-19 08:48:45.196477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-19 08:48:45.196510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:23.377 [2024-11-19 08:48:45.196519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:27:23.377 [2024-11-19 08:48:45.196528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-19 08:48:45.196558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-19 08:48:45.196569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:23.377 [2024-11-19 08:48:45.196576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:23.377 [2024-11-19 08:48:45.196587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-19 08:48:45.196615] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:23.377 [2024-11-19 08:48:45.196749] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:23.377 [2024-11-19 08:48:45.196779] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:23.377 [2024-11-19 08:48:45.196792] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:23.377 [2024-11-19 08:48:45.196804] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:23.377 [2024-11-19 08:48:45.196814] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:23.377 [2024-11-19 08:48:45.196825] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:23.377 [2024-11-19 08:48:45.196835] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:23.377 [2024-11-19 08:48:45.196845] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:23.377 [2024-11-19 08:48:45.196858] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:23.377 [2024-11-19 08:48:45.196866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-19 08:48:45.196875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:23.377 [2024-11-19 08:48:45.196883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:27:23.377 [2024-11-19 08:48:45.196893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-19 08:48:45.196969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-19 08:48:45.196983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:23.377 [2024-11-19 08:48:45.196991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:23.377 [2024-11-19 08:48:45.197008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-19 08:48:45.197091] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:23.377 [2024-11-19 08:48:45.197104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:23.377 [2024-11-19 08:48:45.197112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:23.377 [2024-11-19 08:48:45.197122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.377 [2024-11-19 08:48:45.197130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:23.377 [2024-11-19 08:48:45.197138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:23.377 [2024-11-19 08:48:45.197145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:23.377 [2024-11-19 08:48:45.197154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:23.377 [2024-11-19 08:48:45.197162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:23.377 [2024-11-19 08:48:45.197172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:23.377 [2024-11-19 08:48:45.197180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:23.377 [2024-11-19 08:48:45.197190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:23.377 [2024-11-19 08:48:45.197196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:23.377 [2024-11-19 08:48:45.197206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:23.377 [2024-11-19 08:48:45.197214] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:23.377 [2024-11-19 08:48:45.197223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.377 [2024-11-19 08:48:45.197229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:23.377 [2024-11-19 08:48:45.197238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:23.377 [2024-11-19 08:48:45.197245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.377 [2024-11-19 08:48:45.197253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:23.377 [2024-11-19 08:48:45.197260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:23.377 [2024-11-19 08:48:45.197268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:23.377 [2024-11-19 08:48:45.197275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:23.377 [2024-11-19 08:48:45.197283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:23.377 [2024-11-19 08:48:45.197290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:23.377 [2024-11-19 08:48:45.197297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:23.377 [2024-11-19 08:48:45.197304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:23.377 [2024-11-19 08:48:45.197311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:23.377 [2024-11-19 08:48:45.197318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:23.377 [2024-11-19 08:48:45.197328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:23.377 [2024-11-19 08:48:45.197334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:23.377 [2024-11-19 08:48:45.197342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:23.377 [2024-11-19 08:48:45.197349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:23.377 [2024-11-19 08:48:45.197356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:23.377 [2024-11-19 08:48:45.197363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:23.377 [2024-11-19 08:48:45.197372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:23.377 [2024-11-19 08:48:45.197378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:23.377 [2024-11-19 08:48:45.197386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:23.377 [2024-11-19 08:48:45.197392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:23.377 [2024-11-19 08:48:45.197400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.377 [2024-11-19 08:48:45.197406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:23.377 [2024-11-19 08:48:45.197414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:23.377 [2024-11-19 08:48:45.197422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.377 [2024-11-19 08:48:45.197430] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:23.377 [2024-11-19 08:48:45.197439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:23.377 [2024-11-19 08:48:45.197449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:27:23.377 [2024-11-19 08:48:45.197465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.377 [2024-11-19 08:48:45.197476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:23.377 [2024-11-19 08:48:45.197483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:23.377 [2024-11-19 08:48:45.197491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:23.378 [2024-11-19 08:48:45.197498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:23.378 [2024-11-19 08:48:45.197507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:23.378 [2024-11-19 08:48:45.197514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:23.378 [2024-11-19 08:48:45.197527] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:23.378 [2024-11-19 08:48:45.197536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:23.378 [2024-11-19 08:48:45.197550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:23.378 [2024-11-19 08:48:45.197557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:23.378 [2024-11-19 08:48:45.197568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:23.378 [2024-11-19 08:48:45.197576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:23.378 [2024-11-19 08:48:45.197585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:23.378 [2024-11-19 08:48:45.197593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:23.378 [2024-11-19 08:48:45.197603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:23.378 [2024-11-19 08:48:45.197610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:23.378 [2024-11-19 08:48:45.197619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:23.378 [2024-11-19 08:48:45.197627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:23.378 [2024-11-19 08:48:45.197636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:23.378 [2024-11-19 08:48:45.197643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:23.378 [2024-11-19 08:48:45.197651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:23.378 [2024-11-19 08:48:45.197658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:27:23.378 [2024-11-19 08:48:45.197667] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:23.378 [2024-11-19 08:48:45.197674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:23.378 [2024-11-19 08:48:45.197685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:23.378 [2024-11-19 08:48:45.197692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:23.378 [2024-11-19 08:48:45.197701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:23.378 [2024-11-19 08:48:45.197708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:23.378 [2024-11-19 08:48:45.197728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.378 [2024-11-19 08:48:45.197743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:23.378 [2024-11-19 08:48:45.197756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:27:23.378 [2024-11-19 08:48:45.197764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.378 [2024-11-19 08:48:45.197805] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:23.378 [2024-11-19 08:48:45.197815] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:27.581 [2024-11-19 08:48:48.843120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.581 [2024-11-19 08:48:48.843271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:27.581 [2024-11-19 08:48:48.843291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3652.338 ms 00:27:27.581 [2024-11-19 08:48:48.843299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.581 [2024-11-19 08:48:48.854438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.581 [2024-11-19 08:48:48.854565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:27.581 [2024-11-19 08:48:48.854584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.055 ms 00:27:27.581 [2024-11-19 08:48:48.854592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.581 [2024-11-19 08:48:48.854747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.581 [2024-11-19 08:48:48.854761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:27.581 [2024-11-19 08:48:48.854771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:27.581 [2024-11-19 08:48:48.854779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.581 [2024-11-19 08:48:48.865155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.581 [2024-11-19 08:48:48.865278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:27.581 [2024-11-19 08:48:48.865296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.354 ms 00:27:27.581 [2024-11-19 08:48:48.865304] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.581 [2024-11-19 08:48:48.865345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.581 [2024-11-19 08:48:48.865354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:27.581 [2024-11-19 08:48:48.865364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:27.581 [2024-11-19 08:48:48.865373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.581 [2024-11-19 08:48:48.865854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.581 [2024-11-19 08:48:48.865867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:27.581 [2024-11-19 08:48:48.865879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:27:27.581 [2024-11-19 08:48:48.865888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.581 [2024-11-19 08:48:48.865988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.581 [2024-11-19 08:48:48.866004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:27.581 [2024-11-19 08:48:48.866014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:27.581 [2024-11-19 08:48:48.866021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.581 [2024-11-19 08:48:48.872912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.581 [2024-11-19 08:48:48.872960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:27.581 [2024-11-19 08:48:48.872972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.881 ms 00:27:27.582 [2024-11-19 08:48:48.872980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:48.880154] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:27.582 [2024-11-19 08:48:48.883316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:48.883345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:27.582 [2024-11-19 08:48:48.883355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.292 ms 00:27:27.582 [2024-11-19 08:48:48.883364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:48.970141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:48.970204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:27.582 [2024-11-19 08:48:48.970219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.917 ms 00:27:27.582 [2024-11-19 08:48:48.970239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:48.970421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:48.970434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:27.582 [2024-11-19 08:48:48.970452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:27:27.582 [2024-11-19 08:48:48.970461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:48.974204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:48.974326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial 
band info metadata 00:27:27.582 [2024-11-19 08:48:48.974339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.731 ms 00:27:27.582 [2024-11-19 08:48:48.974352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:48.977260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:48.977297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:27.582 [2024-11-19 08:48:48.977307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:27:27.582 [2024-11-19 08:48:48.977315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:48.977569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:48.977584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:27.582 [2024-11-19 08:48:48.977593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:27:27.582 [2024-11-19 08:48:48.977618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:49.015587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:49.015685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:27.582 [2024-11-19 08:48:49.015727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.008 ms 00:27:27.582 [2024-11-19 08:48:49.015757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:49.020255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:49.020337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:27.582 [2024-11-19 08:48:49.020366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.423 ms 00:27:27.582 [2024-11-19 08:48:49.020388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:49.023753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:49.023855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:27.582 [2024-11-19 08:48:49.023884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms 00:27:27.582 [2024-11-19 08:48:49.023906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:49.027349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:49.027428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:27.582 [2024-11-19 08:48:49.027463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.402 ms 00:27:27.582 [2024-11-19 08:48:49.027488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:49.027558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:49.027601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:27.582 [2024-11-19 08:48:49.027631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:27.582 [2024-11-19 08:48:49.027674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:49.027798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:49.027813] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:27.582 [2024-11-19 08:48:49.027822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:27:27.582 [2024-11-19 08:48:49.027832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:49.028909] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3850.722 ms, result 0 00:27:27.582 { 00:27:27.582 "name": "ftl0", 00:27:27.582 "uuid": "cab91ae8-5b37-48ec-8dc8-334c406b4da9" 00:27:27.582 } 00:27:27.582 08:48:49 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:27:27.582 08:48:49 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:27.582 08:48:49 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:27:27.582 08:48:49 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:27.582 [2024-11-19 08:48:49.480758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:49.480793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:27.582 [2024-11-19 08:48:49.480805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:27.582 [2024-11-19 08:48:49.480815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:49.480837] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:27.582 [2024-11-19 08:48:49.481530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:49.481550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:27.582 [2024-11-19 08:48:49.481559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:27:27.582 [2024-11-19 08:48:49.481569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:49.481804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:49.481818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:27.582 [2024-11-19 08:48:49.481826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:27:27.582 [2024-11-19 08:48:49.481835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.582 [2024-11-19 08:48:49.484212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.582 [2024-11-19 08:48:49.484235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:27.582 [2024-11-19 08:48:49.484244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.364 ms 00:27:27.582 [2024-11-19 08:48:49.484252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.844 [2024-11-19 08:48:49.489053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.844 [2024-11-19 08:48:49.489100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:27.844 [2024-11-19 08:48:49.489109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.794 ms 00:27:27.844 [2024-11-19 08:48:49.489118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.844 [2024-11-19 08:48:49.490490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:27.844 [2024-11-19 08:48:49.490535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:27.844 [2024-11-19 08:48:49.490546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:27:27.844 [2024-11-19 08:48:49.490555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.844 [2024-11-19 08:48:49.495197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.844 [2024-11-19 08:48:49.495239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:27.844 [2024-11-19 08:48:49.495250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.623 ms 00:27:27.844 [2024-11-19 08:48:49.495262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.844 [2024-11-19 08:48:49.495364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.844 [2024-11-19 08:48:49.495376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:27.844 [2024-11-19 08:48:49.495384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:27:27.844 [2024-11-19 08:48:49.495396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.844 [2024-11-19 08:48:49.497233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.844 [2024-11-19 08:48:49.497287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:27.844 [2024-11-19 08:48:49.497297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.824 ms 00:27:27.844 [2024-11-19 08:48:49.497306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.844 [2024-11-19 08:48:49.498713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.844 [2024-11-19 08:48:49.498770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:27.844 [2024-11-19 08:48:49.498779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.379 ms 00:27:27.844 [2024-11-19 08:48:49.498787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.844 [2024-11-19 08:48:49.499867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.844 [2024-11-19 08:48:49.499945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:27.844 [2024-11-19 08:48:49.499966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:27:27.844 [2024-11-19 08:48:49.499975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.844 [2024-11-19 08:48:49.501042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.844 [2024-11-19 08:48:49.501079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:27.844 [2024-11-19 08:48:49.501088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:27:27.844 [2024-11-19 08:48:49.501097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.844 [2024-11-19 08:48:49.501124] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:27.844 [2024-11-19 08:48:49.501144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:27.844 [2024-11-19 08:48:49.501154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:27.844 [2024-11-19 08:48:49.501164] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501386] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 
08:48:49.501604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:27:27.845 [2024-11-19 08:48:49.501857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:27.845 [2024-11-19 08:48:49.501994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:27.846 [2024-11-19 08:48:49.502004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:27.846 [2024-11-19 08:48:49.502024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:27.846 [2024-11-19 08:48:49.502033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:27.846 [2024-11-19 08:48:49.502041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:27.846 [2024-11-19 08:48:49.502051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:27.846 [2024-11-19 08:48:49.502058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:27.846 [2024-11-19 08:48:49.502067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:27.846 [2024-11-19 08:48:49.502074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:27.846 [2024-11-19 08:48:49.502093] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:27.846 [2024-11-19 08:48:49.502101] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cab91ae8-5b37-48ec-8dc8-334c406b4da9 00:27:27.846 
[2024-11-19 08:48:49.502122] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:27.846 [2024-11-19 08:48:49.502129] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:27.846 [2024-11-19 08:48:49.502140] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:27.846 [2024-11-19 08:48:49.502148] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:27.846 [2024-11-19 08:48:49.502157] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:27.846 [2024-11-19 08:48:49.502165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:27.846 [2024-11-19 08:48:49.502177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:27.846 [2024-11-19 08:48:49.502184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:27.846 [2024-11-19 08:48:49.502192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:27.846 [2024-11-19 08:48:49.502199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.846 [2024-11-19 08:48:49.502209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:27.846 [2024-11-19 08:48:49.502218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:27:27.846 [2024-11-19 08:48:49.502227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.503959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.846 [2024-11-19 08:48:49.503983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:27.846 [2024-11-19 08:48:49.503991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:27:27.846 [2024-11-19 08:48:49.504000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.504120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.846 [2024-11-19 08:48:49.504133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:27.846 [2024-11-19 08:48:49.504148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:27:27.846 [2024-11-19 08:48:49.504157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.510566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.846 [2024-11-19 08:48:49.510602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:27.846 [2024-11-19 08:48:49.510612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.846 [2024-11-19 08:48:49.510624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.510672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.846 [2024-11-19 08:48:49.510683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:27.846 [2024-11-19 08:48:49.510691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.846 [2024-11-19 08:48:49.510708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.510786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.846 [2024-11-19 08:48:49.510803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:27.846 [2024-11-19 08:48:49.510811] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.846 [2024-11-19 08:48:49.510821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.510841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.846 [2024-11-19 08:48:49.510851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:27.846 [2024-11-19 08:48:49.510858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.846 [2024-11-19 08:48:49.510867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.524466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.846 [2024-11-19 08:48:49.524529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:27.846 [2024-11-19 08:48:49.524541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.846 [2024-11-19 08:48:49.524550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.533242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.846 [2024-11-19 08:48:49.533284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:27.846 [2024-11-19 08:48:49.533297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.846 [2024-11-19 08:48:49.533306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.533374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.846 [2024-11-19 08:48:49.533388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:27.846 [2024-11-19 08:48:49.533397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.846 [2024-11-19 08:48:49.533407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.533439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.846 [2024-11-19 08:48:49.533452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:27.846 [2024-11-19 08:48:49.533459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.846 [2024-11-19 08:48:49.533467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.533537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.846 [2024-11-19 08:48:49.533549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:27.846 [2024-11-19 08:48:49.533564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.846 [2024-11-19 08:48:49.533574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.533607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.846 [2024-11-19 08:48:49.533619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:27.846 [2024-11-19 08:48:49.533629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.846 [2024-11-19 08:48:49.533637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.533674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.846 [2024-11-19 08:48:49.533687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:27:27.846 [2024-11-19 08:48:49.533695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.846 [2024-11-19 08:48:49.533704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.533779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.846 [2024-11-19 08:48:49.533793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:27.846 [2024-11-19 08:48:49.533801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.846 [2024-11-19 08:48:49.533810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.846 [2024-11-19 08:48:49.533936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.242 ms, result 0 00:27:27.846 true 00:27:27.846 08:48:49 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92058 00:27:27.846 08:48:49 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 92058 ']' 00:27:27.846 08:48:49 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92058 00:27:27.846 08:48:49 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:27:27.846 08:48:49 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:27.846 08:48:49 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92058 00:27:27.846 08:48:49 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:27.846 killing process with pid 92058 00:27:27.846 08:48:49 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:27.846 08:48:49 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92058' 00:27:27.846 08:48:49 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 92058 00:27:27.846 08:48:49 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 92058 00:27:33.131 08:48:54 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:27:36.427 262144+0 records in 00:27:36.427 262144+0 records out 00:27:36.427 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.23536 s, 332 MB/s 00:27:36.427 08:48:57 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:37.810 08:48:59 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:37.810 [2024-11-19 08:48:59.527811] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:27:37.810 [2024-11-19 08:48:59.527922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92295 ] 00:27:37.810 [2024-11-19 08:48:59.682405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.810 [2024-11-19 08:48:59.710164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.070 [2024-11-19 08:48:59.813137] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:38.070 [2024-11-19 08:48:59.813214] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:38.070 [2024-11-19 08:48:59.965751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.070 [2024-11-19 08:48:59.965798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:38.070 [2024-11-19 08:48:59.965819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:38.070 [2024-11-19 08:48:59.965827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.070 [2024-11-19 08:48:59.965871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.070 [2024-11-19 08:48:59.965888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:38.070 [2024-11-19 08:48:59.965896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:38.070 [2024-11-19 08:48:59.965903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.070 [2024-11-19 08:48:59.965924] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:38.070 [2024-11-19 08:48:59.966141] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:38.070 [2024-11-19 08:48:59.966168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.070 [2024-11-19 08:48:59.966182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:38.070 [2024-11-19 08:48:59.966191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:27:38.070 [2024-11-19 08:48:59.966200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.070 [2024-11-19 08:48:59.967676] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:38.070 [2024-11-19 08:48:59.970064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.070 [2024-11-19 08:48:59.970101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:38.070 [2024-11-19 08:48:59.970112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.394 ms 00:27:38.070 [2024-11-19 08:48:59.970120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.070 [2024-11-19 08:48:59.970187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.070 [2024-11-19 08:48:59.970199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:38.070 [2024-11-19 08:48:59.970207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:38.070 [2024-11-19 08:48:59.970223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.332 [2024-11-19 08:48:59.977103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:38.332 [2024-11-19 08:48:59.977221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:38.332 [2024-11-19 08:48:59.977234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.845 ms 00:27:38.332 [2024-11-19 08:48:59.977263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.332 [2024-11-19 08:48:59.977350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.332 [2024-11-19 08:48:59.977361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:38.332 [2024-11-19 08:48:59.977369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:38.332 [2024-11-19 08:48:59.977375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.332 [2024-11-19 08:48:59.977426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.332 [2024-11-19 08:48:59.977436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:38.332 [2024-11-19 08:48:59.977444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:38.332 [2024-11-19 08:48:59.977450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.332 [2024-11-19 08:48:59.977485] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:38.332 [2024-11-19 08:48:59.979144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.332 [2024-11-19 08:48:59.979173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:38.332 [2024-11-19 08:48:59.979182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 00:27:38.332 [2024-11-19 08:48:59.979189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.332 [2024-11-19 08:48:59.979216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.332 [2024-11-19 08:48:59.979224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:38.332 [2024-11-19 08:48:59.979232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:38.332 [2024-11-19 08:48:59.979247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.332 [2024-11-19 08:48:59.979289] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:38.332 [2024-11-19 08:48:59.979313] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:38.332 [2024-11-19 08:48:59.979350] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:38.332 [2024-11-19 08:48:59.979366] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:38.333 [2024-11-19 08:48:59.979443] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:38.333 [2024-11-19 08:48:59.979453] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:38.333 [2024-11-19 08:48:59.979462] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:38.333 [2024-11-19 08:48:59.979475] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:38.333 [2024-11-19 08:48:59.979489] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:38.333 [2024-11-19 08:48:59.979502] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:38.333 [2024-11-19 08:48:59.979510] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:38.333 [2024-11-19 08:48:59.979516] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:38.333 [2024-11-19 08:48:59.979523] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:38.333 [2024-11-19 08:48:59.979532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.333 [2024-11-19 08:48:59.979540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:38.333 [2024-11-19 08:48:59.979547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:27:38.333 [2024-11-19 08:48:59.979573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.333 [2024-11-19 08:48:59.979639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.333 [2024-11-19 08:48:59.979657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:38.333 [2024-11-19 08:48:59.979665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:38.333 [2024-11-19 08:48:59.979678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.333 [2024-11-19 08:48:59.979798] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:38.333 [2024-11-19 08:48:59.979812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:38.333 [2024-11-19 08:48:59.979820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:38.333 [2024-11-19 08:48:59.979828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.333 [2024-11-19 08:48:59.979835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:38.333 [2024-11-19 08:48:59.979842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:38.333 [2024-11-19 08:48:59.979849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:38.333 [2024-11-19 08:48:59.979857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:38.333 [2024-11-19 08:48:59.979865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:38.333 [2024-11-19 08:48:59.979873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:38.333 [2024-11-19 08:48:59.979881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:38.333 [2024-11-19 08:48:59.979892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:38.333 [2024-11-19 08:48:59.979899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:38.333 [2024-11-19 08:48:59.979905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:38.333 [2024-11-19 08:48:59.979911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:38.333 [2024-11-19 08:48:59.979917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.333 [2024-11-19 08:48:59.979924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:38.333 [2024-11-19 08:48:59.979931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:38.333 [2024-11-19 08:48:59.979937] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.333 [2024-11-19 08:48:59.979944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:38.333 [2024-11-19 08:48:59.979950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:38.333 [2024-11-19 08:48:59.979957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.333 [2024-11-19 08:48:59.979964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:38.333 [2024-11-19 08:48:59.979970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:38.333 [2024-11-19 08:48:59.979976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.333 [2024-11-19 08:48:59.979982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:38.333 [2024-11-19 08:48:59.979988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:38.333 [2024-11-19 08:48:59.979998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.333 [2024-11-19 08:48:59.980005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:38.333 [2024-11-19 08:48:59.980011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:38.333 [2024-11-19 08:48:59.980018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.333 [2024-11-19 08:48:59.980024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:38.333 [2024-11-19 08:48:59.980029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:38.333 [2024-11-19 08:48:59.980036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:38.333 [2024-11-19 08:48:59.980042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:38.333 [2024-11-19 08:48:59.980049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:38.333 [2024-11-19 08:48:59.980055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:38.333 [2024-11-19 08:48:59.980061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:38.333 [2024-11-19 08:48:59.980068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:38.333 [2024-11-19 08:48:59.980075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.333 [2024-11-19 08:48:59.980081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:38.333 [2024-11-19 08:48:59.980087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:38.333 [2024-11-19 08:48:59.980093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.333 [2024-11-19 08:48:59.980103] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:38.333 [2024-11-19 08:48:59.980112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:38.333 [2024-11-19 08:48:59.980120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:38.333 [2024-11-19 08:48:59.980127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.333 [2024-11-19 08:48:59.980134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:38.333 [2024-11-19 08:48:59.980140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:38.333 [2024-11-19 08:48:59.980147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:38.333 
[2024-11-19 08:48:59.980154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:38.333 [2024-11-19 08:48:59.980160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:38.333 [2024-11-19 08:48:59.980166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:38.333 [2024-11-19 08:48:59.980174] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:38.333 [2024-11-19 08:48:59.980182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.333 [2024-11-19 08:48:59.980191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:38.333 [2024-11-19 08:48:59.980198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:38.333 [2024-11-19 08:48:59.980207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:38.333 [2024-11-19 08:48:59.980213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:38.333 [2024-11-19 08:48:59.980222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:38.333 [2024-11-19 08:48:59.980230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:38.333 [2024-11-19 08:48:59.980236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:38.333 [2024-11-19 08:48:59.980243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:38.333 [2024-11-19 08:48:59.980250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:38.333 [2024-11-19 08:48:59.980257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:38.333 [2024-11-19 08:48:59.980264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:38.333 [2024-11-19 08:48:59.980281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:38.333 [2024-11-19 08:48:59.980289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:38.333 [2024-11-19 08:48:59.980296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:38.333 [2024-11-19 08:48:59.980303] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:38.333 [2024-11-19 08:48:59.980310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.333 [2024-11-19 08:48:59.980318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:38.333 [2024-11-19 08:48:59.980326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:38.333 [2024-11-19 08:48:59.980334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:38.333 [2024-11-19 08:48:59.980341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:38.333 [2024-11-19 08:48:59.980352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.333 [2024-11-19 08:48:59.980360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:38.333 [2024-11-19 08:48:59.980367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:27:38.333 [2024-11-19 08:48:59.980376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.333 [2024-11-19 08:48:59.992466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:48:59.992510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:38.334 [2024-11-19 08:48:59.992521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.068 ms 00:27:38.334 [2024-11-19 08:48:59.992529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:48:59.992600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:48:59.992609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:38.334 [2024-11-19 08:48:59.992616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:38.334 [2024-11-19 08:48:59.992639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.022567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.022657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:38.334 [2024-11-19 08:49:00.022694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.923 ms 00:27:38.334 [2024-11-19 08:49:00.022754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.022868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.022897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:38.334 [2024-11-19 08:49:00.022923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:38.334 [2024-11-19 08:49:00.022976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.023768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.023829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:38.334 [2024-11-19 08:49:00.023857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:27:38.334 [2024-11-19 08:49:00.023881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.024187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.024224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:38.334 [2024-11-19 08:49:00.024250] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:27:38.334 [2024-11-19 08:49:00.024272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.034647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.034845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:38.334 [2024-11-19 08:49:00.034895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.342 ms 00:27:38.334 [2024-11-19 08:49:00.034914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.038424] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:38.334 [2024-11-19 08:49:00.038571] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:38.334 [2024-11-19 08:49:00.038599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.038617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:38.334 [2024-11-19 08:49:00.038635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.498 ms 00:27:38.334 [2024-11-19 08:49:00.038650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.056254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.056301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:38.334 [2024-11-19 08:49:00.056315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.533 ms 00:27:38.334 [2024-11-19 08:49:00.056323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.057968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.057999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:38.334 [2024-11-19 08:49:00.058010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.607 ms 00:27:38.334 [2024-11-19 08:49:00.058017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.059363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.059399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:38.334 [2024-11-19 08:49:00.059409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:27:38.334 [2024-11-19 08:49:00.059416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.059702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.059736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:38.334 [2024-11-19 08:49:00.059747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:27:38.334 [2024-11-19 08:49:00.059754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.078285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.078412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:38.334 [2024-11-19 08:49:00.078434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
18.549 ms 00:27:38.334 [2024-11-19 08:49:00.078442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.084124] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:38.334 [2024-11-19 08:49:00.086765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.086795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:38.334 [2024-11-19 08:49:00.086814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.288 ms 00:27:38.334 [2024-11-19 08:49:00.086827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.086883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.086892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:38.334 [2024-11-19 08:49:00.086901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:38.334 [2024-11-19 08:49:00.086918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.086997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.087017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:38.334 [2024-11-19 08:49:00.087026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:38.334 [2024-11-19 08:49:00.087039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.087066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.087076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:38.334 [2024-11-19 08:49:00.087083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:38.334 [2024-11-19 08:49:00.087089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.087121] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:38.334 [2024-11-19 08:49:00.087132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.087139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:38.334 [2024-11-19 08:49:00.087152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:38.334 [2024-11-19 08:49:00.087160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.090630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.090672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:38.334 [2024-11-19 08:49:00.090683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.460 ms 00:27:38.334 [2024-11-19 08:49:00.090698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 [2024-11-19 08:49:00.090783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.334 [2024-11-19 08:49:00.090795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:38.334 [2024-11-19 08:49:00.090804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:38.334 [2024-11-19 08:49:00.090811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.334 
[2024-11-19 08:49:00.091900] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 125.980 ms, result 0 00:27:39.274  [2024-11-19T08:49:02.122Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-19T08:49:03.506Z] Copying: 54/1024 [MB] (24 MBps) [2024-11-19T08:49:04.447Z] Copying: 78/1024 [MB] (24 MBps) [2024-11-19T08:49:05.387Z] Copying: 103/1024 [MB] (25 MBps) [2024-11-19T08:49:06.328Z] Copying: 128/1024 [MB] (24 MBps) [2024-11-19T08:49:07.269Z] Copying: 153/1024 [MB] (24 MBps) [2024-11-19T08:49:08.224Z] Copying: 178/1024 [MB] (24 MBps) [2024-11-19T08:49:09.166Z] Copying: 203/1024 [MB] (24 MBps) [2024-11-19T08:49:10.104Z] Copying: 227/1024 [MB] (24 MBps) [2024-11-19T08:49:11.488Z] Copying: 252/1024 [MB] (24 MBps) [2024-11-19T08:49:12.429Z] Copying: 276/1024 [MB] (24 MBps) [2024-11-19T08:49:13.370Z] Copying: 301/1024 [MB] (24 MBps) [2024-11-19T08:49:14.311Z] Copying: 326/1024 [MB] (24 MBps) [2024-11-19T08:49:15.252Z] Copying: 350/1024 [MB] (24 MBps) [2024-11-19T08:49:16.192Z] Copying: 375/1024 [MB] (24 MBps) [2024-11-19T08:49:17.132Z] Copying: 399/1024 [MB] (24 MBps) [2024-11-19T08:49:18.514Z] Copying: 424/1024 [MB] (24 MBps) [2024-11-19T08:49:19.083Z] Copying: 450/1024 [MB] (25 MBps) [2024-11-19T08:49:20.465Z] Copying: 476/1024 [MB] (25 MBps) [2024-11-19T08:49:21.407Z] Copying: 501/1024 [MB] (25 MBps) [2024-11-19T08:49:22.347Z] Copying: 526/1024 [MB] (24 MBps) [2024-11-19T08:49:23.288Z] Copying: 551/1024 [MB] (24 MBps) [2024-11-19T08:49:24.228Z] Copying: 576/1024 [MB] (24 MBps) [2024-11-19T08:49:25.167Z] Copying: 600/1024 [MB] (24 MBps) [2024-11-19T08:49:26.106Z] Copying: 624/1024 [MB] (24 MBps) [2024-11-19T08:49:27.491Z] Copying: 648/1024 [MB] (24 MBps) [2024-11-19T08:49:28.077Z] Copying: 673/1024 [MB] (24 MBps) [2024-11-19T08:49:29.460Z] Copying: 697/1024 [MB] (23 MBps) [2024-11-19T08:49:30.401Z] Copying: 722/1024 [MB] (25 MBps) [2024-11-19T08:49:31.343Z] Copying: 747/1024 [MB] (25 MBps) [2024-11-19T08:49:32.284Z] Copying: 771/1024 [MB] (24 MBps) [2024-11-19T08:49:33.223Z] Copying: 796/1024 [MB] (24 MBps) [2024-11-19T08:49:34.163Z] Copying: 821/1024 [MB] (24 MBps) [2024-11-19T08:49:35.103Z] Copying: 846/1024 [MB] (25 MBps) [2024-11-19T08:49:36.043Z] Copying: 872/1024 [MB] (25 MBps) [2024-11-19T08:49:37.428Z] Copying: 898/1024 [MB] (25 MBps) [2024-11-19T08:49:38.370Z] Copying: 923/1024 [MB] (25 MBps) [2024-11-19T08:49:39.311Z] Copying: 949/1024 [MB] (25 MBps) [2024-11-19T08:49:40.250Z] Copying: 974/1024 [MB] (25 MBps) [2024-11-19T08:49:41.192Z] Copying: 999/1024 [MB] (25 MBps) [2024-11-19T08:49:41.192Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-19 08:49:41.000884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.285 [2024-11-19 08:49:41.000991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:19.285 [2024-11-19 08:49:41.001034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:19.285 [2024-11-19 08:49:41.001057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.285 [2024-11-19 08:49:41.001124] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:19.285 [2024-11-19 08:49:41.001874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.285 [2024-11-19 08:49:41.001924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:19.285 [2024-11-19 08:49:41.001965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.687 ms 00:28:19.285 [2024-11-19 08:49:41.002008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.285 [2024-11-19 08:49:41.004046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.285 [2024-11-19 08:49:41.004116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:19.285 [2024-11-19 08:49:41.004149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.008 ms 00:28:19.285 [2024-11-19 08:49:41.004172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.285 [2024-11-19 08:49:41.004241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.285 [2024-11-19 08:49:41.004276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:19.285 [2024-11-19 08:49:41.004319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:19.285 [2024-11-19 08:49:41.004366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.285 [2024-11-19 08:49:41.004436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.285 [2024-11-19 08:49:41.004473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:19.285 [2024-11-19 08:49:41.004502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:19.285 [2024-11-19 08:49:41.004531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.285 [2024-11-19 08:49:41.004565] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:19.285 [2024-11-19 08:49:41.004602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.004657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.004698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.004784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.004824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.004869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.004924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.004975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005249] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:19.285 [2024-11-19 08:49:41.005702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.005753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.005797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.005845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.005889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.005944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.005991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006409] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 
08:49:41.006937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.006996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:28:19.286 [2024-11-19 08:49:41.007123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:19.286 [2024-11-19 08:49:41.007211] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:19.286 [2024-11-19 08:49:41.007220] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cab91ae8-5b37-48ec-8dc8-334c406b4da9 00:28:19.286 [2024-11-19 08:49:41.007227] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:19.286 [2024-11-19 08:49:41.007233] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:19.286 [2024-11-19 08:49:41.007239] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:19.286 [2024-11-19 08:49:41.007246] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:19.286 [2024-11-19 08:49:41.007252] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:19.286 [2024-11-19 08:49:41.007259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:19.286 [2024-11-19 08:49:41.007266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:19.286 [2024-11-19 08:49:41.007272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:19.286 [2024-11-19 08:49:41.007277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:19.286 [2024-11-19 08:49:41.007284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.286 [2024-11-19 08:49:41.007292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:19.286 [2024-11-19 08:49:41.007300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:28:19.286 [2024-11-19 08:49:41.007306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.286 [2024-11-19 08:49:41.009015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.286 [2024-11-19 
08:49:41.009041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:19.287 [2024-11-19 08:49:41.009049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.693 ms 00:28:19.287 [2024-11-19 08:49:41.009057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.009162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.287 [2024-11-19 08:49:41.009170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:19.287 [2024-11-19 08:49:41.009177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:28:19.287 [2024-11-19 08:49:41.009186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.014976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.287 [2024-11-19 08:49:41.015031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:19.287 [2024-11-19 08:49:41.015056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.287 [2024-11-19 08:49:41.015087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.015142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.287 [2024-11-19 08:49:41.015161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:19.287 [2024-11-19 08:49:41.015180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.287 [2024-11-19 08:49:41.015201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.015268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.287 [2024-11-19 08:49:41.015300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:19.287 [2024-11-19 08:49:41.015337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.287 [2024-11-19 08:49:41.015357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.015401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.287 [2024-11-19 08:49:41.015422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:19.287 [2024-11-19 08:49:41.015442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.287 [2024-11-19 08:49:41.015463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.028159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.287 [2024-11-19 08:49:41.028270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:19.287 [2024-11-19 08:49:41.028297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.287 [2024-11-19 08:49:41.028316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.036282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.287 [2024-11-19 08:49:41.036363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:19.287 [2024-11-19 08:49:41.036390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.287 [2024-11-19 08:49:41.036427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.036491] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.287 [2024-11-19 08:49:41.036513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:19.287 [2024-11-19 08:49:41.036532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.287 [2024-11-19 08:49:41.036549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.036581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.287 [2024-11-19 08:49:41.036638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:19.287 [2024-11-19 08:49:41.036664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.287 [2024-11-19 08:49:41.036695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.036778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.287 [2024-11-19 08:49:41.036820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:19.287 [2024-11-19 08:49:41.036848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.287 [2024-11-19 08:49:41.036866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.036920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.287 [2024-11-19 08:49:41.036965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:19.287 [2024-11-19 08:49:41.036994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.287 [2024-11-19 08:49:41.037013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.037069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.287 [2024-11-19 08:49:41.037098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:19.287 [2024-11-19 08:49:41.037127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.287 [2024-11-19 08:49:41.037155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.037207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.287 [2024-11-19 08:49:41.037243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:19.287 [2024-11-19 08:49:41.037269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.287 [2024-11-19 08:49:41.037287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.287 [2024-11-19 08:49:41.037436] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 36.579 ms, result 0 00:28:20.228 00:28:20.228 00:28:20.488 08:49:42 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:28:20.488 [2024-11-19 08:49:42.232105] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:28:20.488 [2024-11-19 08:49:42.232313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92733 ] 00:28:20.488 [2024-11-19 08:49:42.364907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.488 [2024-11-19 08:49:42.389195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.749 [2024-11-19 08:49:42.490412] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:20.749 [2024-11-19 08:49:42.490564] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:20.749 [2024-11-19 08:49:42.643964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.749 [2024-11-19 08:49:42.644076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:20.749 [2024-11-19 08:49:42.644111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:20.749 [2024-11-19 08:49:42.644131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.749 [2024-11-19 08:49:42.644196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.749 [2024-11-19 08:49:42.644224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:20.749 [2024-11-19 08:49:42.644252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:20.749 [2024-11-19 08:49:42.644294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.749 [2024-11-19 08:49:42.644346] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:20.749 [2024-11-19 08:49:42.644575] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:20.749 [2024-11-19 08:49:42.644642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.749 [2024-11-19 08:49:42.644673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:20.749 [2024-11-19 08:49:42.644694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:28:20.749 [2024-11-19 08:49:42.644759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.749 [2024-11-19 08:49:42.645077] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:20.749 [2024-11-19 08:49:42.645135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.749 [2024-11-19 08:49:42.645155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:20.749 [2024-11-19 08:49:42.645186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:20.749 [2024-11-19 08:49:42.645205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.749 [2024-11-19 08:49:42.645268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.749 [2024-11-19 08:49:42.645343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:20.749 [2024-11-19 08:49:42.645370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:20.749 [2024-11-19 08:49:42.645400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.749 [2024-11-19 08:49:42.645638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:20.749 [2024-11-19 08:49:42.645689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:20.749 [2024-11-19 08:49:42.645727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:28:20.749 [2024-11-19 08:49:42.645749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.749 [2024-11-19 08:49:42.645859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.749 [2024-11-19 08:49:42.645897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:20.749 [2024-11-19 08:49:42.645924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:20.749 [2024-11-19 08:49:42.645950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.749 [2024-11-19 08:49:42.645986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.749 [2024-11-19 08:49:42.646022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:20.749 [2024-11-19 08:49:42.646053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:20.749 [2024-11-19 08:49:42.646083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.749 [2024-11-19 08:49:42.646114] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:20.749 [2024-11-19 08:49:42.647815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.749 [2024-11-19 08:49:42.647870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:20.749 [2024-11-19 08:49:42.647899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.706 ms 00:28:20.749 [2024-11-19 08:49:42.647918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.749 [2024-11-19 08:49:42.647968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.749 [2024-11-19 08:49:42.647991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:20.749 [2024-11-19 08:49:42.648022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:20.749 [2024-11-19 08:49:42.648041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.749 [2024-11-19 08:49:42.648146] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:20.749 [2024-11-19 08:49:42.648192] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:20.749 [2024-11-19 08:49:42.648263] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:20.749 [2024-11-19 08:49:42.648327] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:20.749 [2024-11-19 08:49:42.648451] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:20.749 [2024-11-19 08:49:42.648494] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:20.749 [2024-11-19 08:49:42.648535] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:20.749 [2024-11-19 08:49:42.648587] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:20.749 [2024-11-19 08:49:42.648640] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:20.749 [2024-11-19 08:49:42.648677] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:20.749 [2024-11-19 08:49:42.648696] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:20.749 [2024-11-19 08:49:42.648747] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:20.749 [2024-11-19 08:49:42.648779] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:20.750 [2024-11-19 08:49:42.648807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.750 [2024-11-19 08:49:42.648837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:20.750 [2024-11-19 08:49:42.648870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:28:20.750 [2024-11-19 08:49:42.648879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.750 [2024-11-19 08:49:42.648951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.750 [2024-11-19 08:49:42.648970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:20.750 [2024-11-19 08:49:42.648978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:20.750 [2024-11-19 08:49:42.648989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.750 [2024-11-19 08:49:42.649073] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:20.750 [2024-11-19 08:49:42.649086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:20.750 [2024-11-19 08:49:42.649097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:20.750 [2024-11-19 08:49:42.649106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.750 [2024-11-19 08:49:42.649115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:20.750 [2024-11-19 08:49:42.649121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:20.750 [2024-11-19 08:49:42.649128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:20.750 [2024-11-19 08:49:42.649134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:20.750 [2024-11-19 08:49:42.649141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:20.750 [2024-11-19 08:49:42.649147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:20.750 [2024-11-19 08:49:42.649153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:20.750 [2024-11-19 08:49:42.649160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:20.750 [2024-11-19 08:49:42.649166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:20.750 [2024-11-19 08:49:42.649172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:20.750 [2024-11-19 08:49:42.649179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:20.750 [2024-11-19 08:49:42.649184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.750 [2024-11-19 08:49:42.649191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:20.750 [2024-11-19 08:49:42.649198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:20.750 [2024-11-19 08:49:42.649206] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.750 [2024-11-19 08:49:42.649212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:20.750 [2024-11-19 08:49:42.649218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:20.750 [2024-11-19 08:49:42.649224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.750 [2024-11-19 08:49:42.649230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:20.750 [2024-11-19 08:49:42.649236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:20.750 [2024-11-19 08:49:42.649242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.750 [2024-11-19 08:49:42.649248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:20.750 [2024-11-19 08:49:42.649253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:20.750 [2024-11-19 08:49:42.649259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.750 [2024-11-19 08:49:42.649266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:20.750 [2024-11-19 08:49:42.649272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:20.750 [2024-11-19 08:49:42.649279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.750 [2024-11-19 08:49:42.649286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:20.750 [2024-11-19 08:49:42.649292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:20.750 [2024-11-19 08:49:42.649297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:20.750 [2024-11-19 08:49:42.649309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:20.750 [2024-11-19 08:49:42.649315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:20.750 [2024-11-19 08:49:42.649321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:20.750 [2024-11-19 08:49:42.649328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:20.750 [2024-11-19 08:49:42.649334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:20.750 [2024-11-19 08:49:42.649340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.750 [2024-11-19 08:49:42.649346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:20.750 [2024-11-19 08:49:42.649353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:20.750 [2024-11-19 08:49:42.649359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.750 [2024-11-19 08:49:42.649366] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:20.750 [2024-11-19 08:49:42.649373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:20.750 [2024-11-19 08:49:42.649380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:20.750 [2024-11-19 08:49:42.649386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.750 [2024-11-19 08:49:42.649396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:20.750 [2024-11-19 08:49:42.649403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:20.750 [2024-11-19 08:49:42.649409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:20.750 
[2024-11-19 08:49:42.649417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:20.750 [2024-11-19 08:49:42.649423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:20.750 [2024-11-19 08:49:42.649429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:20.750 [2024-11-19 08:49:42.649437] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:20.750 [2024-11-19 08:49:42.649446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:20.750 [2024-11-19 08:49:42.649454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:20.750 [2024-11-19 08:49:42.649461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:20.750 [2024-11-19 08:49:42.649468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:20.750 [2024-11-19 08:49:42.649474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:20.750 [2024-11-19 08:49:42.649481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:20.750 [2024-11-19 08:49:42.649488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:20.750 [2024-11-19 08:49:42.649494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:20.750 [2024-11-19 08:49:42.649501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:20.750 [2024-11-19 08:49:42.649508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:20.750 [2024-11-19 08:49:42.649515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:20.750 [2024-11-19 08:49:42.649524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:20.750 [2024-11-19 08:49:42.649532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:20.750 [2024-11-19 08:49:42.649539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:20.750 [2024-11-19 08:49:42.649553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:20.750 [2024-11-19 08:49:42.649561] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:20.750 [2024-11-19 08:49:42.649569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:20.750 [2024-11-19 08:49:42.649576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:20.750 [2024-11-19 08:49:42.649583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:20.750 [2024-11-19 08:49:42.649592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:20.750 [2024-11-19 08:49:42.649599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:20.750 [2024-11-19 08:49:42.649606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.750 [2024-11-19 08:49:42.649613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:20.750 [2024-11-19 08:49:42.649621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:28:20.750 [2024-11-19 08:49:42.649629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.657793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.657853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:21.011 [2024-11-19 08:49:42.657884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.143 ms 00:28:21.011 [2024-11-19 08:49:42.657905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.657989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.658023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:21.011 [2024-11-19 08:49:42.658041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:21.011 [2024-11-19 08:49:42.658078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.684605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.684903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:21.011 [2024-11-19 08:49:42.685071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.501 ms 00:28:21.011 [2024-11-19 08:49:42.685198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.685433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.685609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:21.011 [2024-11-19 08:49:42.685746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:21.011 [2024-11-19 08:49:42.685856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.686274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.686446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:21.011 [2024-11-19 08:49:42.686570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:28:21.011 [2024-11-19 08:49:42.686671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.687167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.687322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:21.011 [2024-11-19 08:49:42.687441] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:28:21.011 [2024-11-19 08:49:42.687546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.698212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.698342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:21.011 [2024-11-19 08:49:42.698454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.544 ms 00:28:21.011 [2024-11-19 08:49:42.698517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.698917] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:21.011 [2024-11-19 08:49:42.699050] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:21.011 [2024-11-19 08:49:42.699146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.699197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:21.011 [2024-11-19 08:49:42.699317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:28:21.011 [2024-11-19 08:49:42.699388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.716616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.716688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:21.011 [2024-11-19 08:49:42.716733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.168 ms 00:28:21.011 [2024-11-19 08:49:42.716761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.716902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.716989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:21.011 [2024-11-19 08:49:42.717057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:28:21.011 [2024-11-19 08:49:42.717083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.717178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.717224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:21.011 [2024-11-19 08:49:42.717266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:21.011 [2024-11-19 08:49:42.717302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.717654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.717715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:21.011 [2024-11-19 08:49:42.717777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:28:21.011 [2024-11-19 08:49:42.717803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.717843] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:21.011 [2024-11-19 08:49:42.717899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.717956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:28:21.011 [2024-11-19 08:49:42.717994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:21.011 [2024-11-19 08:49:42.718024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.725138] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:21.011 [2024-11-19 08:49:42.725296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.725324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:21.011 [2024-11-19 08:49:42.725385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.246 ms 00:28:21.011 [2024-11-19 08:49:42.725405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.727307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.727367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:21.011 [2024-11-19 08:49:42.727392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.859 ms 00:28:21.011 [2024-11-19 08:49:42.727414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.727499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.727535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:21.011 [2024-11-19 08:49:42.727555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:21.011 [2024-11-19 08:49:42.727583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.727643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.011 [2024-11-19 08:49:42.727677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:21.011 [2024-11-19 08:49:42.727746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:21.011 [2024-11-19 08:49:42.727786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.011 [2024-11-19 08:49:42.727837] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:21.011 [2024-11-19 08:49:42.727871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.012 [2024-11-19 08:49:42.727892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:21.012 [2024-11-19 08:49:42.727935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:21.012 [2024-11-19 08:49:42.727944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.012 [2024-11-19 08:49:42.732073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.012 [2024-11-19 08:49:42.732145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:21.012 [2024-11-19 08:49:42.732173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.118 ms 00:28:21.012 [2024-11-19 08:49:42.732193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.012 [2024-11-19 08:49:42.732271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.012 [2024-11-19 08:49:42.732296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:21.012 [2024-11-19 08:49:42.732328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.031 ms 00:28:21.012 [2024-11-19 08:49:42.732347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.012 [2024-11-19 08:49:42.733546] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 89.292 ms, result 0 00:28:22.394  [2024-11-19T08:49:45.240Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-19T08:49:46.179Z] Copying: 58/1024 [MB] (29 MBps) [2024-11-19T08:49:47.120Z] Copying: 87/1024 [MB] (28 MBps) [2024-11-19T08:49:48.102Z] Copying: 115/1024 [MB] (28 MBps) [2024-11-19T08:49:49.041Z] Copying: 145/1024 [MB] (29 MBps) [2024-11-19T08:49:49.978Z] Copying: 175/1024 [MB] (30 MBps) [2024-11-19T08:49:50.916Z] Copying: 206/1024 [MB] (30 MBps) [2024-11-19T08:49:52.297Z] Copying: 236/1024 [MB] (30 MBps) [2024-11-19T08:49:53.236Z] Copying: 266/1024 [MB] (30 MBps) [2024-11-19T08:49:54.177Z] Copying: 297/1024 [MB] (30 MBps) [2024-11-19T08:49:55.116Z] Copying: 328/1024 [MB] (30 MBps) [2024-11-19T08:49:56.055Z] Copying: 358/1024 [MB] (30 MBps) [2024-11-19T08:49:56.995Z] Copying: 387/1024 [MB] (28 MBps) [2024-11-19T08:49:57.934Z] Copying: 415/1024 [MB] (28 MBps) [2024-11-19T08:49:58.873Z] Copying: 444/1024 [MB] (28 MBps) [2024-11-19T08:50:00.252Z] Copying: 473/1024 [MB] (28 MBps) [2024-11-19T08:50:01.192Z] Copying: 502/1024 [MB] (28 MBps) [2024-11-19T08:50:02.130Z] Copying: 531/1024 [MB] (29 MBps) [2024-11-19T08:50:03.070Z] Copying: 561/1024 [MB] (29 MBps) [2024-11-19T08:50:04.008Z] Copying: 590/1024 [MB] (29 MBps) [2024-11-19T08:50:04.948Z] Copying: 619/1024 [MB] (29 MBps) [2024-11-19T08:50:05.888Z] Copying: 648/1024 [MB] (29 MBps) [2024-11-19T08:50:07.273Z] Copying: 677/1024 [MB] (28 MBps) [2024-11-19T08:50:07.858Z] Copying: 706/1024 [MB] (28 MBps) [2024-11-19T08:50:09.240Z] Copying: 735/1024 [MB] (28 MBps) [2024-11-19T08:50:10.180Z] Copying: 763/1024 [MB] (28 MBps) [2024-11-19T08:50:11.120Z] Copying: 792/1024 [MB] (28 MBps) [2024-11-19T08:50:12.060Z] Copying: 819/1024 [MB] (27 MBps) [2024-11-19T08:50:12.999Z] Copying: 847/1024 [MB] (28 MBps) [2024-11-19T08:50:13.937Z] Copying: 876/1024 [MB] (28 MBps) [2024-11-19T08:50:14.876Z] Copying: 904/1024 [MB] (28 MBps) [2024-11-19T08:50:16.259Z] Copying: 932/1024 [MB] (27 MBps) [2024-11-19T08:50:16.829Z] Copying: 960/1024 [MB] (28 MBps) [2024-11-19T08:50:18.212Z] Copying: 988/1024 [MB] (27 MBps) [2024-11-19T08:50:18.212Z] Copying: 1017/1024 [MB] (28 MBps) [2024-11-19T08:50:18.212Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-19 08:50:18.143302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.305 [2024-11-19 08:50:18.143396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:56.305 [2024-11-19 08:50:18.143425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:56.305 [2024-11-19 08:50:18.143444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.305 [2024-11-19 08:50:18.143487] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:56.305 [2024-11-19 08:50:18.144371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.305 [2024-11-19 08:50:18.144398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:56.305 [2024-11-19 08:50:18.144418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:28:56.305 [2024-11-19 08:50:18.144434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.305 
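A minimal sketch for digesting output like the above, assuming only the literal formats visible in this log: the spdk_dd "Copying: N/1024 [MB] (R MBps)" progress samples and the mngt/ftl_mngt.c trace_step "name:" / "duration:" pairs. The helper name, regexes, and report format below are illustrative only and are not part of SPDK.

import re, sys

COPY_RE = re.compile(r"Copying: (\d+)/(\d+) \[MB\] \((\d+) MBps\)")
NAME_RE = re.compile(r"name: (.+?) \d{2}:\d{2}:\d{2}")
DUR_RE = re.compile(r"duration: ([\d.]+) ms")

def summarize(log_text):
    # spdk_dd progress samples, e.g. "Copying: 29/1024 [MB] (29 MBps)"
    rates = [int(m.group(3)) for m in COPY_RE.finditer(log_text)]
    # trace_step pairs, e.g. "name: Initialize NV cache" / "duration: 26.501 ms"
    names = NAME_RE.findall(log_text)
    durs = [float(d) for d in DUR_RE.findall(log_text)]
    if rates:
        print("copy samples: %d, min/avg/max MBps: %d/%.1f/%d"
              % (len(rates), min(rates), sum(rates) / len(rates), max(rates)))
    for name, dur in zip(names, durs):
        print("%9.3f ms  %s" % (dur, name))

if __name__ == "__main__":
    summarize(sys.stdin.read())

Feeding this log through the script on stdin would list each management step with its duration and summarize the copy throughput samples (around 29 MBps average for the run above).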
[2024-11-19 08:50:18.144875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.305 [2024-11-19 08:50:18.144900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:56.305 [2024-11-19 08:50:18.144918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:28:56.305 [2024-11-19 08:50:18.144934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.305 [2024-11-19 08:50:18.145014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.305 [2024-11-19 08:50:18.145040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:56.305 [2024-11-19 08:50:18.145079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:56.305 [2024-11-19 08:50:18.145096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.305 [2024-11-19 08:50:18.145512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.305 [2024-11-19 08:50:18.145559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:56.305 [2024-11-19 08:50:18.145577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:28:56.305 [2024-11-19 08:50:18.145593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.305 [2024-11-19 08:50:18.145648] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:56.305 [2024-11-19 08:50:18.145676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:56.305 [2024-11-19 08:50:18.145989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146388] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.146697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.147184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.147288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.147376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.147514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.147618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.147734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 
08:50:18.147842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.147942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.148043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.148144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.148242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.148344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.148442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.148542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.148641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.148766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.148868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.148982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.149085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.149184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.149282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.149382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.149480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.149578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.149677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.149791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.149890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.149988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.150086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.150185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.150285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 
00:28:56.306 [2024-11-19 08:50:18.150426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.150523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.150616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.150700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.150817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.150904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.150991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.151088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.151186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.151280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.151376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:56.306 [2024-11-19 08:50:18.151488] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:56.306 [2024-11-19 08:50:18.151547] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cab91ae8-5b37-48ec-8dc8-334c406b4da9 00:28:56.306 [2024-11-19 08:50:18.151654] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:56.306 [2024-11-19 08:50:18.151676] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:56.306 [2024-11-19 08:50:18.151710] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:56.307 [2024-11-19 08:50:18.151746] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:56.307 [2024-11-19 08:50:18.151762] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:56.307 [2024-11-19 08:50:18.151785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:56.307 [2024-11-19 08:50:18.151801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:56.307 [2024-11-19 08:50:18.151816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:56.307 [2024-11-19 08:50:18.151830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:56.307 [2024-11-19 08:50:18.151848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.307 [2024-11-19 08:50:18.151864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:56.307 [2024-11-19 08:50:18.151882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.212 ms 00:28:56.307 [2024-11-19 08:50:18.151897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.154248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.307 [2024-11-19 08:50:18.154291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:56.307 [2024-11-19 08:50:18.154326] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.313 ms 00:28:56.307 [2024-11-19 08:50:18.154366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.154507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.307 [2024-11-19 08:50:18.154540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:56.307 [2024-11-19 08:50:18.154559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:56.307 [2024-11-19 08:50:18.154581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.162478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.307 [2024-11-19 08:50:18.162520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:56.307 [2024-11-19 08:50:18.162535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.307 [2024-11-19 08:50:18.162546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.162613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.307 [2024-11-19 08:50:18.162625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:56.307 [2024-11-19 08:50:18.162636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.307 [2024-11-19 08:50:18.162653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.162693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.307 [2024-11-19 08:50:18.162709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:56.307 [2024-11-19 08:50:18.162735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.307 [2024-11-19 08:50:18.162745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.162767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.307 [2024-11-19 08:50:18.162779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:56.307 [2024-11-19 08:50:18.162789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.307 [2024-11-19 08:50:18.162799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.176314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.307 [2024-11-19 08:50:18.176358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:56.307 [2024-11-19 08:50:18.176369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.307 [2024-11-19 08:50:18.176376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.185209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.307 [2024-11-19 08:50:18.185250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:56.307 [2024-11-19 08:50:18.185261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.307 [2024-11-19 08:50:18.185268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.185322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.307 [2024-11-19 08:50:18.185331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:28:56.307 [2024-11-19 08:50:18.185339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.307 [2024-11-19 08:50:18.185346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.185367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.307 [2024-11-19 08:50:18.185386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:56.307 [2024-11-19 08:50:18.185393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.307 [2024-11-19 08:50:18.185401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.185452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.307 [2024-11-19 08:50:18.185465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:56.307 [2024-11-19 08:50:18.185473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.307 [2024-11-19 08:50:18.185480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.185509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.307 [2024-11-19 08:50:18.185519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:56.307 [2024-11-19 08:50:18.185526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.307 [2024-11-19 08:50:18.185532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.185573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.307 [2024-11-19 08:50:18.185586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:56.307 [2024-11-19 08:50:18.185594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.307 [2024-11-19 08:50:18.185601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.185638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.307 [2024-11-19 08:50:18.185647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:56.307 [2024-11-19 08:50:18.185654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.307 [2024-11-19 08:50:18.185661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.307 [2024-11-19 08:50:18.185800] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.602 ms, result 0 00:28:56.567 00:28:56.567 00:28:56.567 08:50:18 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:58.473 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:58.473 08:50:20 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:28:58.473 [2024-11-19 08:50:20.103216] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:28:58.473 [2024-11-19 08:50:20.103339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93108 ] 00:28:58.473 [2024-11-19 08:50:20.259855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.473 [2024-11-19 08:50:20.288380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.735 [2024-11-19 08:50:20.391548] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:58.735 [2024-11-19 08:50:20.391608] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:58.735 [2024-11-19 08:50:20.544896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.735 [2024-11-19 08:50:20.544941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:58.735 [2024-11-19 08:50:20.544956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:58.735 [2024-11-19 08:50:20.544970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.735 [2024-11-19 08:50:20.545018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.735 [2024-11-19 08:50:20.545027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:58.735 [2024-11-19 08:50:20.545041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:58.735 [2024-11-19 08:50:20.545055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.735 [2024-11-19 08:50:20.545076] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:58.735 [2024-11-19 08:50:20.545270] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:58.735 [2024-11-19 08:50:20.545287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.735 [2024-11-19 08:50:20.545295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:58.736 [2024-11-19 08:50:20.545303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:28:58.736 [2024-11-19 08:50:20.545321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.736 [2024-11-19 08:50:20.545560] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:58.736 [2024-11-19 08:50:20.545576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.736 [2024-11-19 08:50:20.545584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:58.736 [2024-11-19 08:50:20.545592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:58.736 [2024-11-19 08:50:20.545599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.736 [2024-11-19 08:50:20.545669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.736 [2024-11-19 08:50:20.545681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:58.736 [2024-11-19 08:50:20.545690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:58.736 [2024-11-19 08:50:20.545698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.736 [2024-11-19 08:50:20.545924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:58.736 [2024-11-19 08:50:20.545935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:58.736 [2024-11-19 08:50:20.545943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:28:58.736 [2024-11-19 08:50:20.545958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.736 [2024-11-19 08:50:20.546035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.736 [2024-11-19 08:50:20.546049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:58.736 [2024-11-19 08:50:20.546056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:58.736 [2024-11-19 08:50:20.546063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.736 [2024-11-19 08:50:20.546083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.736 [2024-11-19 08:50:20.546091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:58.736 [2024-11-19 08:50:20.546098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:58.736 [2024-11-19 08:50:20.546105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.736 [2024-11-19 08:50:20.546128] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:58.736 [2024-11-19 08:50:20.547724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.736 [2024-11-19 08:50:20.547740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:58.736 [2024-11-19 08:50:20.547748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms 00:28:58.736 [2024-11-19 08:50:20.547756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.736 [2024-11-19 08:50:20.547785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.736 [2024-11-19 08:50:20.547794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:58.736 [2024-11-19 08:50:20.547802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:58.736 [2024-11-19 08:50:20.547809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.736 [2024-11-19 08:50:20.547825] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:58.736 [2024-11-19 08:50:20.547841] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:58.736 [2024-11-19 08:50:20.547879] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:58.736 [2024-11-19 08:50:20.547900] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:58.736 [2024-11-19 08:50:20.547977] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:58.736 [2024-11-19 08:50:20.547987] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:58.736 [2024-11-19 08:50:20.547997] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:58.736 [2024-11-19 08:50:20.548007] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:58.736 [2024-11-19 08:50:20.548024] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:58.736 [2024-11-19 08:50:20.548034] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:58.736 [2024-11-19 08:50:20.548041] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:58.736 [2024-11-19 08:50:20.548047] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:58.736 [2024-11-19 08:50:20.548054] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:58.736 [2024-11-19 08:50:20.548061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.736 [2024-11-19 08:50:20.548068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:58.736 [2024-11-19 08:50:20.548076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:28:58.736 [2024-11-19 08:50:20.548083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.736 [2024-11-19 08:50:20.548144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.736 [2024-11-19 08:50:20.548153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:58.736 [2024-11-19 08:50:20.548162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:58.736 [2024-11-19 08:50:20.548171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.736 [2024-11-19 08:50:20.548253] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:58.736 [2024-11-19 08:50:20.548264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:58.736 [2024-11-19 08:50:20.548271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:58.736 [2024-11-19 08:50:20.548281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.736 [2024-11-19 08:50:20.548288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:58.736 [2024-11-19 08:50:20.548295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:58.736 [2024-11-19 08:50:20.548302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:58.736 [2024-11-19 08:50:20.548307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:58.736 [2024-11-19 08:50:20.548313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:58.736 [2024-11-19 08:50:20.548320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:58.736 [2024-11-19 08:50:20.548326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:58.736 [2024-11-19 08:50:20.548332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:58.736 [2024-11-19 08:50:20.548338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:58.736 [2024-11-19 08:50:20.548344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:58.736 [2024-11-19 08:50:20.548350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:58.736 [2024-11-19 08:50:20.548356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.736 [2024-11-19 08:50:20.548362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:58.736 [2024-11-19 08:50:20.548368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:58.736 [2024-11-19 08:50:20.548374] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.736 [2024-11-19 08:50:20.548382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:58.736 [2024-11-19 08:50:20.548388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:58.736 [2024-11-19 08:50:20.548394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:58.736 [2024-11-19 08:50:20.548400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:58.736 [2024-11-19 08:50:20.548405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:58.736 [2024-11-19 08:50:20.548411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:58.736 [2024-11-19 08:50:20.548416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:58.736 [2024-11-19 08:50:20.548422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:58.736 [2024-11-19 08:50:20.548427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:58.736 [2024-11-19 08:50:20.548433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:58.736 [2024-11-19 08:50:20.548438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:58.736 [2024-11-19 08:50:20.548445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:58.736 [2024-11-19 08:50:20.548451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:58.736 [2024-11-19 08:50:20.548457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:58.737 [2024-11-19 08:50:20.548462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:58.737 [2024-11-19 08:50:20.548467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:58.737 [2024-11-19 08:50:20.548478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:58.737 [2024-11-19 08:50:20.548485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:58.737 [2024-11-19 08:50:20.548491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:58.737 [2024-11-19 08:50:20.548496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:58.737 [2024-11-19 08:50:20.548502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.737 [2024-11-19 08:50:20.548508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:58.737 [2024-11-19 08:50:20.548514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:58.737 [2024-11-19 08:50:20.548519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.737 [2024-11-19 08:50:20.548526] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:58.737 [2024-11-19 08:50:20.548540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:58.737 [2024-11-19 08:50:20.548546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:58.737 [2024-11-19 08:50:20.548554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.737 [2024-11-19 08:50:20.548563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:58.737 [2024-11-19 08:50:20.548569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:58.737 [2024-11-19 08:50:20.548575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:58.737 
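The superblock metadata layout entries ("Region type:... blk_offs:... blk_sz:...", hex FTL blocks) and the dump_region lines (MiB) describe the same regions, and the two agree if one FTL block is 4 KiB: the l2p region, blk_offs:0x20 blk_sz:0x5000, works out to 0.12 MiB offset and 80.00 MiB, exactly as dumped. A small sketch of that conversion follows; the 4 KiB block size is an assumption read off these dumps rather than taken from SPDK itself, and the function name is illustrative.

FTL_BLOCK_SIZE = 4096  # bytes; assumed, consistent with the dumps above

def region_to_mib(blk_offs, blk_sz, block_size=FTL_BLOCK_SIZE):
    """Convert hex block offset/size (as logged) to (offset MiB, size MiB)."""
    to_mib = lambda blocks: int(blocks, 16) * block_size / (1024 * 1024)
    return to_mib(blk_offs), to_mib(blk_sz)

# Example using the l2p region from the superblock layout dump:
print(region_to_mib("0x20", "0x5000"))   # -> (0.125, 80.0)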
[2024-11-19 08:50:20.548581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:58.737 [2024-11-19 08:50:20.548589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:58.737 [2024-11-19 08:50:20.548596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:58.737 [2024-11-19 08:50:20.548603] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:58.737 [2024-11-19 08:50:20.548611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:58.737 [2024-11-19 08:50:20.548619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:58.737 [2024-11-19 08:50:20.548626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:58.737 [2024-11-19 08:50:20.548633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:58.737 [2024-11-19 08:50:20.548639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:58.737 [2024-11-19 08:50:20.548646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:58.737 [2024-11-19 08:50:20.548653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:58.737 [2024-11-19 08:50:20.548659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:58.737 [2024-11-19 08:50:20.548667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:58.737 [2024-11-19 08:50:20.548673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:58.737 [2024-11-19 08:50:20.548679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:58.737 [2024-11-19 08:50:20.548685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:58.737 [2024-11-19 08:50:20.548691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:58.737 [2024-11-19 08:50:20.548700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:58.737 [2024-11-19 08:50:20.548713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:58.737 [2024-11-19 08:50:20.548731] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:58.737 [2024-11-19 08:50:20.548739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:58.737 [2024-11-19 08:50:20.548746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:58.737 [2024-11-19 08:50:20.548754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:58.737 [2024-11-19 08:50:20.548761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:58.737 [2024-11-19 08:50:20.548768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:58.737 [2024-11-19 08:50:20.548776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.737 [2024-11-19 08:50:20.548783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:58.737 [2024-11-19 08:50:20.548806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:28:58.737 [2024-11-19 08:50:20.548812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.737 [2024-11-19 08:50:20.556582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.737 [2024-11-19 08:50:20.556615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:58.737 [2024-11-19 08:50:20.556629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.723 ms 00:28:58.737 [2024-11-19 08:50:20.556637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.737 [2024-11-19 08:50:20.556711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.737 [2024-11-19 08:50:20.556720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:58.737 [2024-11-19 08:50:20.556727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:58.737 [2024-11-19 08:50:20.556748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.737 [2024-11-19 08:50:20.576997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.737 [2024-11-19 08:50:20.577089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:58.737 [2024-11-19 08:50:20.577116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.232 ms 00:28:58.737 [2024-11-19 08:50:20.577134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.737 [2024-11-19 08:50:20.577202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.737 [2024-11-19 08:50:20.577224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:58.737 [2024-11-19 08:50:20.577243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:58.737 [2024-11-19 08:50:20.577262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.737 [2024-11-19 08:50:20.577461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.737 [2024-11-19 08:50:20.577490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:58.737 [2024-11-19 08:50:20.577517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:28:58.737 [2024-11-19 08:50:20.577534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.737 [2024-11-19 08:50:20.577817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.737 [2024-11-19 08:50:20.577848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:58.737 [2024-11-19 08:50:20.577897] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:28:58.737 [2024-11-19 08:50:20.577932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.737 [2024-11-19 08:50:20.586590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.737 [2024-11-19 08:50:20.586636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:58.737 [2024-11-19 08:50:20.586653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.630 ms 00:28:58.737 [2024-11-19 08:50:20.586670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.737 [2024-11-19 08:50:20.586855] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:58.737 [2024-11-19 08:50:20.586890] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:58.737 [2024-11-19 08:50:20.586907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.737 [2024-11-19 08:50:20.586920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:58.737 [2024-11-19 08:50:20.586934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:28:58.738 [2024-11-19 08:50:20.586945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.738 [2024-11-19 08:50:20.600949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.738 [2024-11-19 08:50:20.601071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:58.738 [2024-11-19 08:50:20.601088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.995 ms 00:28:58.738 [2024-11-19 08:50:20.601101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.738 [2024-11-19 08:50:20.601218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.738 [2024-11-19 08:50:20.601229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:58.738 [2024-11-19 08:50:20.601250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:28:58.738 [2024-11-19 08:50:20.601259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.738 [2024-11-19 08:50:20.601311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.738 [2024-11-19 08:50:20.601322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:58.738 [2024-11-19 08:50:20.601335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:28:58.738 [2024-11-19 08:50:20.601342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.738 [2024-11-19 08:50:20.601625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.738 [2024-11-19 08:50:20.601638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:58.738 [2024-11-19 08:50:20.601647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:28:58.738 [2024-11-19 08:50:20.601660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.738 [2024-11-19 08:50:20.601679] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:58.738 [2024-11-19 08:50:20.601690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.738 [2024-11-19 08:50:20.601699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:28:58.738 [2024-11-19 08:50:20.601747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:58.738 [2024-11-19 08:50:20.601761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.738 [2024-11-19 08:50:20.608329] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:58.738 [2024-11-19 08:50:20.608501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.738 [2024-11-19 08:50:20.608516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:58.738 [2024-11-19 08:50:20.608525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.729 ms 00:28:58.738 [2024-11-19 08:50:20.608532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.738 [2024-11-19 08:50:20.610342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.738 [2024-11-19 08:50:20.610376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:58.738 [2024-11-19 08:50:20.610384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.788 ms 00:28:58.738 [2024-11-19 08:50:20.610392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.738 [2024-11-19 08:50:20.610475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.738 [2024-11-19 08:50:20.610486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:58.738 [2024-11-19 08:50:20.610494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:58.738 [2024-11-19 08:50:20.610502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.738 [2024-11-19 08:50:20.610538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.738 [2024-11-19 08:50:20.610553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:58.738 [2024-11-19 08:50:20.610561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:58.738 [2024-11-19 08:50:20.610567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.738 [2024-11-19 08:50:20.610593] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:58.738 [2024-11-19 08:50:20.610601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.738 [2024-11-19 08:50:20.610617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:58.738 [2024-11-19 08:50:20.610624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:58.738 [2024-11-19 08:50:20.610630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.738 [2024-11-19 08:50:20.614589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.738 [2024-11-19 08:50:20.614627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:58.738 [2024-11-19 08:50:20.614638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.948 ms 00:28:58.738 [2024-11-19 08:50:20.614645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.738 [2024-11-19 08:50:20.614712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.738 [2024-11-19 08:50:20.614738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:58.738 [2024-11-19 08:50:20.614746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.035 ms 00:28:58.738 [2024-11-19 08:50:20.614753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.738 [2024-11-19 08:50:20.615704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 70.566 ms, result 0 00:29:00.121  [2024-11-19T08:50:22.968Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-19T08:50:23.912Z] Copying: 50/1024 [MB] (24 MBps) [2024-11-19T08:50:24.851Z] Copying: 75/1024 [MB] (25 MBps) [2024-11-19T08:50:25.792Z] Copying: 102/1024 [MB] (26 MBps) [2024-11-19T08:50:26.733Z] Copying: 127/1024 [MB] (25 MBps) [2024-11-19T08:50:27.673Z] Copying: 153/1024 [MB] (25 MBps) [2024-11-19T08:50:28.650Z] Copying: 178/1024 [MB] (25 MBps) [2024-11-19T08:50:30.032Z] Copying: 203/1024 [MB] (25 MBps) [2024-11-19T08:50:30.970Z] Copying: 228/1024 [MB] (24 MBps) [2024-11-19T08:50:31.910Z] Copying: 254/1024 [MB] (25 MBps) [2024-11-19T08:50:32.849Z] Copying: 279/1024 [MB] (25 MBps) [2024-11-19T08:50:33.789Z] Copying: 305/1024 [MB] (25 MBps) [2024-11-19T08:50:34.730Z] Copying: 330/1024 [MB] (25 MBps) [2024-11-19T08:50:35.670Z] Copying: 355/1024 [MB] (25 MBps) [2024-11-19T08:50:36.611Z] Copying: 380/1024 [MB] (24 MBps) [2024-11-19T08:50:37.993Z] Copying: 405/1024 [MB] (24 MBps) [2024-11-19T08:50:38.932Z] Copying: 430/1024 [MB] (25 MBps) [2024-11-19T08:50:39.871Z] Copying: 455/1024 [MB] (25 MBps) [2024-11-19T08:50:40.812Z] Copying: 480/1024 [MB] (25 MBps) [2024-11-19T08:50:41.753Z] Copying: 504/1024 [MB] (24 MBps) [2024-11-19T08:50:42.694Z] Copying: 529/1024 [MB] (24 MBps) [2024-11-19T08:50:43.633Z] Copying: 554/1024 [MB] (24 MBps) [2024-11-19T08:50:45.016Z] Copying: 579/1024 [MB] (24 MBps) [2024-11-19T08:50:45.586Z] Copying: 604/1024 [MB] (24 MBps) [2024-11-19T08:50:46.969Z] Copying: 629/1024 [MB] (24 MBps) [2024-11-19T08:50:47.909Z] Copying: 653/1024 [MB] (24 MBps) [2024-11-19T08:50:48.864Z] Copying: 677/1024 [MB] (23 MBps) [2024-11-19T08:50:49.808Z] Copying: 700/1024 [MB] (23 MBps) [2024-11-19T08:50:50.749Z] Copying: 724/1024 [MB] (24 MBps) [2024-11-19T08:50:51.690Z] Copying: 748/1024 [MB] (23 MBps) [2024-11-19T08:50:52.631Z] Copying: 772/1024 [MB] (24 MBps) [2024-11-19T08:50:53.569Z] Copying: 797/1024 [MB] (25 MBps) [2024-11-19T08:50:54.951Z] Copying: 822/1024 [MB] (24 MBps) [2024-11-19T08:50:55.891Z] Copying: 845/1024 [MB] (23 MBps) [2024-11-19T08:50:56.832Z] Copying: 869/1024 [MB] (23 MBps) [2024-11-19T08:50:57.771Z] Copying: 893/1024 [MB] (23 MBps) [2024-11-19T08:50:58.711Z] Copying: 916/1024 [MB] (23 MBps) [2024-11-19T08:50:59.652Z] Copying: 941/1024 [MB] (24 MBps) [2024-11-19T08:51:00.592Z] Copying: 964/1024 [MB] (23 MBps) [2024-11-19T08:51:01.974Z] Copying: 988/1024 [MB] (23 MBps) [2024-11-19T08:51:02.991Z] Copying: 1012/1024 [MB] (23 MBps) [2024-11-19T08:51:02.991Z] Copying: 1023/1024 [MB] (11 MBps) [2024-11-19T08:51:02.991Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-19 08:51:02.838531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.084 [2024-11-19 08:51:02.838599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:41.084 [2024-11-19 08:51:02.838614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:41.084 [2024-11-19 08:51:02.838622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.084 [2024-11-19 08:51:02.841771] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:41.084 [2024-11-19 08:51:02.843667] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.084 [2024-11-19 08:51:02.843746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:41.084 [2024-11-19 08:51:02.843778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:29:41.084 [2024-11-19 08:51:02.843800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.084 [2024-11-19 08:51:02.852840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.084 [2024-11-19 08:51:02.852918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:41.084 [2024-11-19 08:51:02.852945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.285 ms 00:29:41.084 [2024-11-19 08:51:02.852964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.084 [2024-11-19 08:51:02.853033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.084 [2024-11-19 08:51:02.853055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:41.084 [2024-11-19 08:51:02.853075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:41.084 [2024-11-19 08:51:02.853116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.084 [2024-11-19 08:51:02.853193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.084 [2024-11-19 08:51:02.853216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:41.084 [2024-11-19 08:51:02.853256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:41.084 [2024-11-19 08:51:02.853282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.084 [2024-11-19 08:51:02.853326] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:41.084 [2024-11-19 08:51:02.853376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127488 / 261120 wr_cnt: 1 state: open 00:29:41.084 [2024-11-19 08:51:02.853418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.853500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.853542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.853592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.853630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.853686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.853768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.853809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.853859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.853910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.853947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.853986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854321] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:41.084 [2024-11-19 08:51:02.854452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 
08:51:02.854484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:29:41.085 [2024-11-19 08:51:02.854648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:41.085 [2024-11-19 08:51:02.854885] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:41.085 [2024-11-19 08:51:02.854913] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cab91ae8-5b37-48ec-8dc8-334c406b4da9 00:29:41.085 [2024-11-19 08:51:02.854996] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127488 00:29:41.085 [2024-11-19 08:51:02.855023] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127520 00:29:41.085 [2024-11-19 08:51:02.855042] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127488 00:29:41.085 [2024-11-19 08:51:02.855065] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:29:41.085 [2024-11-19 08:51:02.855095] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:41.085 [2024-11-19 08:51:02.855123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:41.085 [2024-11-19 08:51:02.855173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:41.085 [2024-11-19 08:51:02.855190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:41.085 [2024-11-19 08:51:02.855214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:41.085 [2024-11-19 08:51:02.855254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.085 [2024-11-19 08:51:02.855273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:41.085 [2024-11-19 08:51:02.855308] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.932 ms 00:29:41.085 [2024-11-19 08:51:02.855335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.085 [2024-11-19 08:51:02.857035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.085 [2024-11-19 08:51:02.857085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:41.085 [2024-11-19 08:51:02.857114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:29:41.085 [2024-11-19 08:51:02.857134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.085 [2024-11-19 08:51:02.857258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.085 [2024-11-19 08:51:02.857302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:41.085 [2024-11-19 08:51:02.857331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:29:41.085 [2024-11-19 08:51:02.857349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.085 [2024-11-19 08:51:02.863125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.085 [2024-11-19 08:51:02.863178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:41.085 [2024-11-19 08:51:02.863211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.085 [2024-11-19 08:51:02.863221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.085 [2024-11-19 08:51:02.863273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.085 [2024-11-19 08:51:02.863283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:41.085 [2024-11-19 08:51:02.863290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.085 [2024-11-19 08:51:02.863297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.085 [2024-11-19 08:51:02.863343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.085 [2024-11-19 08:51:02.863354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:41.085 [2024-11-19 08:51:02.863361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.085 [2024-11-19 08:51:02.863371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.085 [2024-11-19 08:51:02.863386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.085 [2024-11-19 08:51:02.863394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:41.085 [2024-11-19 08:51:02.863401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.085 [2024-11-19 08:51:02.863407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.085 [2024-11-19 08:51:02.876007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.085 [2024-11-19 08:51:02.876048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:41.085 [2024-11-19 08:51:02.876062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.085 [2024-11-19 08:51:02.876070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.085 [2024-11-19 08:51:02.884478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.085 [2024-11-19 08:51:02.884511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:29:41.085 [2024-11-19 08:51:02.884522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.085 [2024-11-19 08:51:02.884529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.085 [2024-11-19 08:51:02.884592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.085 [2024-11-19 08:51:02.884611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:41.085 [2024-11-19 08:51:02.884619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.085 [2024-11-19 08:51:02.884626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.085 [2024-11-19 08:51:02.884651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.085 [2024-11-19 08:51:02.884666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:41.085 [2024-11-19 08:51:02.884674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.085 [2024-11-19 08:51:02.884681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.085 [2024-11-19 08:51:02.884749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.085 [2024-11-19 08:51:02.884760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:41.085 [2024-11-19 08:51:02.884768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.086 [2024-11-19 08:51:02.884775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.086 [2024-11-19 08:51:02.884800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.086 [2024-11-19 08:51:02.884813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:41.086 [2024-11-19 08:51:02.884820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.086 [2024-11-19 08:51:02.884828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.086 [2024-11-19 08:51:02.884869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.086 [2024-11-19 08:51:02.884879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:41.086 [2024-11-19 08:51:02.884885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.086 [2024-11-19 08:51:02.884892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.086 [2024-11-19 08:51:02.884933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.086 [2024-11-19 08:51:02.884942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:41.086 [2024-11-19 08:51:02.884949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.086 [2024-11-19 08:51:02.884955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.086 [2024-11-19 08:51:02.885079] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 47.888 ms, result 0 00:29:42.026 00:29:42.026 00:29:42.026 08:51:03 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:29:42.026 [2024-11-19 08:51:03.805102] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 
initialization... 00:29:42.026 [2024-11-19 08:51:03.805295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93550 ] 00:29:42.286 [2024-11-19 08:51:03.960929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.286 [2024-11-19 08:51:03.985478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.286 [2024-11-19 08:51:04.087539] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:42.286 [2024-11-19 08:51:04.087604] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:42.548 [2024-11-19 08:51:04.241436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.548 [2024-11-19 08:51:04.241483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:42.548 [2024-11-19 08:51:04.241500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:42.548 [2024-11-19 08:51:04.241508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.548 [2024-11-19 08:51:04.241563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.548 [2024-11-19 08:51:04.241574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:42.548 [2024-11-19 08:51:04.241582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:42.548 [2024-11-19 08:51:04.241589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.548 [2024-11-19 08:51:04.241609] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:42.548 [2024-11-19 08:51:04.241835] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:42.548 [2024-11-19 08:51:04.241853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.548 [2024-11-19 08:51:04.241861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:42.548 [2024-11-19 08:51:04.241869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:29:42.548 [2024-11-19 08:51:04.241878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.548 [2024-11-19 08:51:04.242119] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:42.548 [2024-11-19 08:51:04.242149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.548 [2024-11-19 08:51:04.242159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:42.548 [2024-11-19 08:51:04.242167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:42.548 [2024-11-19 08:51:04.242174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.548 [2024-11-19 08:51:04.242247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.548 [2024-11-19 08:51:04.242267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:42.548 [2024-11-19 08:51:04.242274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:42.548 [2024-11-19 08:51:04.242289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.548 [2024-11-19 08:51:04.242500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:42.548 [2024-11-19 08:51:04.242511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:42.548 [2024-11-19 08:51:04.242531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:29:42.548 [2024-11-19 08:51:04.242545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.548 [2024-11-19 08:51:04.242616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.548 [2024-11-19 08:51:04.242629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:42.548 [2024-11-19 08:51:04.242638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:42.548 [2024-11-19 08:51:04.242645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.548 [2024-11-19 08:51:04.242663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.548 [2024-11-19 08:51:04.242672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:42.548 [2024-11-19 08:51:04.242679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:42.548 [2024-11-19 08:51:04.242686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.548 [2024-11-19 08:51:04.242701] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:42.548 [2024-11-19 08:51:04.244470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.548 [2024-11-19 08:51:04.244529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:42.548 [2024-11-19 08:51:04.244554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 00:29:42.548 [2024-11-19 08:51:04.244575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.548 [2024-11-19 08:51:04.244617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.548 [2024-11-19 08:51:04.244639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:42.548 [2024-11-19 08:51:04.244665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:42.548 [2024-11-19 08:51:04.244684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.548 [2024-11-19 08:51:04.244797] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:42.548 [2024-11-19 08:51:04.244842] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:42.548 [2024-11-19 08:51:04.244916] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:42.548 [2024-11-19 08:51:04.244969] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:42.548 [2024-11-19 08:51:04.245082] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:42.548 [2024-11-19 08:51:04.245134] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:42.548 [2024-11-19 08:51:04.245189] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:42.548 [2024-11-19 08:51:04.245247] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:42.548 [2024-11-19 08:51:04.245289] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:42.548 [2024-11-19 08:51:04.245338] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:42.548 [2024-11-19 08:51:04.245370] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:42.548 [2024-11-19 08:51:04.245410] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:42.548 [2024-11-19 08:51:04.245446] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:42.548 [2024-11-19 08:51:04.245474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.548 [2024-11-19 08:51:04.245508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:42.548 [2024-11-19 08:51:04.245536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:29:42.548 [2024-11-19 08:51:04.245573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.548 [2024-11-19 08:51:04.245675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.548 [2024-11-19 08:51:04.245709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:42.548 [2024-11-19 08:51:04.245763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:42.548 [2024-11-19 08:51:04.245797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.548 [2024-11-19 08:51:04.245920] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:42.548 [2024-11-19 08:51:04.245966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:42.548 [2024-11-19 08:51:04.245995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:42.548 [2024-11-19 08:51:04.246024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:42.548 [2024-11-19 08:51:04.246056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:42.548 [2024-11-19 08:51:04.246087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:42.548 [2024-11-19 08:51:04.246116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:42.548 [2024-11-19 08:51:04.246137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:42.548 [2024-11-19 08:51:04.246159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:42.548 [2024-11-19 08:51:04.246195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:42.549 [2024-11-19 08:51:04.246225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:42.549 [2024-11-19 08:51:04.246257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:42.549 [2024-11-19 08:51:04.246286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:42.549 [2024-11-19 08:51:04.246312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:42.549 [2024-11-19 08:51:04.246332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:42.549 [2024-11-19 08:51:04.246372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:42.549 [2024-11-19 08:51:04.246402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:42.549 [2024-11-19 08:51:04.246429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:42.549 [2024-11-19 08:51:04.246461] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:42.549 [2024-11-19 08:51:04.246490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:42.549 [2024-11-19 08:51:04.246520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:42.549 [2024-11-19 08:51:04.246550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:42.549 [2024-11-19 08:51:04.246569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:42.549 [2024-11-19 08:51:04.246593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:42.549 [2024-11-19 08:51:04.246625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:42.549 [2024-11-19 08:51:04.246645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:42.549 [2024-11-19 08:51:04.246668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:42.549 [2024-11-19 08:51:04.246709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:42.549 [2024-11-19 08:51:04.246762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:42.549 [2024-11-19 08:51:04.246793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:42.549 [2024-11-19 08:51:04.246823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:42.549 [2024-11-19 08:51:04.246843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:42.549 [2024-11-19 08:51:04.246863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:42.549 [2024-11-19 08:51:04.246898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:42.549 [2024-11-19 08:51:04.246924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:42.549 [2024-11-19 08:51:04.246945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:42.549 [2024-11-19 08:51:04.246973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:42.549 [2024-11-19 08:51:04.246999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:42.549 [2024-11-19 08:51:04.247026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:42.549 [2024-11-19 08:51:04.247055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:42.549 [2024-11-19 08:51:04.247076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:42.549 [2024-11-19 08:51:04.247103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:42.549 [2024-11-19 08:51:04.247135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:42.549 [2024-11-19 08:51:04.247161] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:42.549 [2024-11-19 08:51:04.247189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:42.549 [2024-11-19 08:51:04.247216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:42.549 [2024-11-19 08:51:04.247225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:42.549 [2024-11-19 08:51:04.247236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:42.549 [2024-11-19 08:51:04.247243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:42.549 [2024-11-19 08:51:04.247250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:42.549 
[2024-11-19 08:51:04.247256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:42.549 [2024-11-19 08:51:04.247262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:42.549 [2024-11-19 08:51:04.247269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:42.549 [2024-11-19 08:51:04.247277] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:42.549 [2024-11-19 08:51:04.247286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:42.549 [2024-11-19 08:51:04.247294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:42.549 [2024-11-19 08:51:04.247303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:42.549 [2024-11-19 08:51:04.247310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:42.549 [2024-11-19 08:51:04.247317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:42.549 [2024-11-19 08:51:04.247324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:42.549 [2024-11-19 08:51:04.247331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:42.549 [2024-11-19 08:51:04.247337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:42.549 [2024-11-19 08:51:04.247344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:42.549 [2024-11-19 08:51:04.247350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:42.549 [2024-11-19 08:51:04.247357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:42.549 [2024-11-19 08:51:04.247363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:42.549 [2024-11-19 08:51:04.247371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:42.549 [2024-11-19 08:51:04.247378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:42.549 [2024-11-19 08:51:04.247394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:42.549 [2024-11-19 08:51:04.247402] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:42.549 [2024-11-19 08:51:04.247409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:42.549 [2024-11-19 08:51:04.247417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:42.549 [2024-11-19 08:51:04.247426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:42.549 [2024-11-19 08:51:04.247435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:42.549 [2024-11-19 08:51:04.247442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:42.549 [2024-11-19 08:51:04.247450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.549 [2024-11-19 08:51:04.247457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:42.549 [2024-11-19 08:51:04.247464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:29:42.549 [2024-11-19 08:51:04.247470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.549 [2024-11-19 08:51:04.255571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.549 [2024-11-19 08:51:04.255639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:42.549 [2024-11-19 08:51:04.255672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.076 ms 00:29:42.549 [2024-11-19 08:51:04.255691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.549 [2024-11-19 08:51:04.255784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.549 [2024-11-19 08:51:04.255808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:42.549 [2024-11-19 08:51:04.255827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:42.549 [2024-11-19 08:51:04.255914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.549 [2024-11-19 08:51:04.278588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.549 [2024-11-19 08:51:04.278859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:42.549 [2024-11-19 08:51:04.278984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.642 ms 00:29:42.549 [2024-11-19 08:51:04.279094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.549 [2024-11-19 08:51:04.279273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.549 [2024-11-19 08:51:04.279399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:42.549 [2024-11-19 08:51:04.279498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:42.549 [2024-11-19 08:51:04.279590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.549 [2024-11-19 08:51:04.280013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.549 [2024-11-19 08:51:04.280160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:42.549 [2024-11-19 08:51:04.280275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:29:42.549 [2024-11-19 08:51:04.280368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.549 [2024-11-19 08:51:04.280868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.549 [2024-11-19 08:51:04.281030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:42.549 [2024-11-19 08:51:04.281131] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:29:42.549 [2024-11-19 08:51:04.281224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.549 [2024-11-19 08:51:04.291748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.549 [2024-11-19 08:51:04.291877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:42.549 [2024-11-19 08:51:04.291948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.410 ms 00:29:42.549 [2024-11-19 08:51:04.292006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.292322] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:42.550 [2024-11-19 08:51:04.292421] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:42.550 [2024-11-19 08:51:04.292445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.550 [2024-11-19 08:51:04.292463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:42.550 [2024-11-19 08:51:04.292483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:29:42.550 [2024-11-19 08:51:04.292501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.309575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.550 [2024-11-19 08:51:04.309610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:42.550 [2024-11-19 08:51:04.309621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.069 ms 00:29:42.550 [2024-11-19 08:51:04.309630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.309758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.550 [2024-11-19 08:51:04.309771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:42.550 [2024-11-19 08:51:04.309795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:29:42.550 [2024-11-19 08:51:04.309807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.309875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.550 [2024-11-19 08:51:04.309887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:42.550 [2024-11-19 08:51:04.309907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:42.550 [2024-11-19 08:51:04.309915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.310200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.550 [2024-11-19 08:51:04.310230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:42.550 [2024-11-19 08:51:04.310240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:29:42.550 [2024-11-19 08:51:04.310248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.310268] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:42.550 [2024-11-19 08:51:04.310278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.550 [2024-11-19 08:51:04.310288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:29:42.550 [2024-11-19 08:51:04.310297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:42.550 [2024-11-19 08:51:04.310311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.317794] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:42.550 [2024-11-19 08:51:04.317929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.550 [2024-11-19 08:51:04.317950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:42.550 [2024-11-19 08:51:04.317960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.612 ms 00:29:42.550 [2024-11-19 08:51:04.317967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.319937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.550 [2024-11-19 08:51:04.319967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:42.550 [2024-11-19 08:51:04.319976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.953 ms 00:29:42.550 [2024-11-19 08:51:04.319983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.320048] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:29:42.550 [2024-11-19 08:51:04.320550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.550 [2024-11-19 08:51:04.320562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:42.550 [2024-11-19 08:51:04.320571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:29:42.550 [2024-11-19 08:51:04.320578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.320623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.550 [2024-11-19 08:51:04.320634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:42.550 [2024-11-19 08:51:04.320643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:42.550 [2024-11-19 08:51:04.320658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.320691] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:42.550 [2024-11-19 08:51:04.320701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.550 [2024-11-19 08:51:04.320709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:42.550 [2024-11-19 08:51:04.320733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:42.550 [2024-11-19 08:51:04.320741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.324909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.550 [2024-11-19 08:51:04.325019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:42.550 [2024-11-19 08:51:04.325033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.160 ms 00:29:42.550 [2024-11-19 08:51:04.325041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.325110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.550 [2024-11-19 08:51:04.325120] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:42.550 [2024-11-19 08:51:04.325138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:42.550 [2024-11-19 08:51:04.325155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.550 [2024-11-19 08:51:04.326182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 84.514 ms, result 0 00:29:43.932  [2024-11-19T08:51:06.778Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-19T08:51:07.719Z] Copying: 54/1024 [MB] (27 MBps) [2024-11-19T08:51:08.684Z] Copying: 82/1024 [MB] (28 MBps) [2024-11-19T08:51:09.638Z] Copying: 111/1024 [MB] (28 MBps) [2024-11-19T08:51:10.578Z] Copying: 140/1024 [MB] (28 MBps) [2024-11-19T08:51:11.518Z] Copying: 169/1024 [MB] (28 MBps) [2024-11-19T08:51:12.901Z] Copying: 197/1024 [MB] (28 MBps) [2024-11-19T08:51:13.840Z] Copying: 226/1024 [MB] (28 MBps) [2024-11-19T08:51:14.780Z] Copying: 255/1024 [MB] (28 MBps) [2024-11-19T08:51:15.721Z] Copying: 283/1024 [MB] (27 MBps) [2024-11-19T08:51:16.661Z] Copying: 310/1024 [MB] (27 MBps) [2024-11-19T08:51:17.602Z] Copying: 339/1024 [MB] (28 MBps) [2024-11-19T08:51:18.542Z] Copying: 367/1024 [MB] (28 MBps) [2024-11-19T08:51:19.483Z] Copying: 396/1024 [MB] (28 MBps) [2024-11-19T08:51:20.866Z] Copying: 425/1024 [MB] (29 MBps) [2024-11-19T08:51:21.808Z] Copying: 453/1024 [MB] (28 MBps) [2024-11-19T08:51:22.749Z] Copying: 482/1024 [MB] (28 MBps) [2024-11-19T08:51:23.690Z] Copying: 510/1024 [MB] (28 MBps) [2024-11-19T08:51:24.631Z] Copying: 539/1024 [MB] (28 MBps) [2024-11-19T08:51:25.571Z] Copying: 566/1024 [MB] (27 MBps) [2024-11-19T08:51:26.512Z] Copying: 593/1024 [MB] (27 MBps) [2024-11-19T08:51:27.451Z] Copying: 621/1024 [MB] (27 MBps) [2024-11-19T08:51:28.851Z] Copying: 648/1024 [MB] (27 MBps) [2024-11-19T08:51:29.790Z] Copying: 677/1024 [MB] (28 MBps) [2024-11-19T08:51:30.731Z] Copying: 707/1024 [MB] (29 MBps) [2024-11-19T08:51:31.670Z] Copying: 736/1024 [MB] (29 MBps) [2024-11-19T08:51:32.609Z] Copying: 766/1024 [MB] (29 MBps) [2024-11-19T08:51:33.549Z] Copying: 795/1024 [MB] (28 MBps) [2024-11-19T08:51:34.488Z] Copying: 823/1024 [MB] (28 MBps) [2024-11-19T08:51:35.428Z] Copying: 852/1024 [MB] (29 MBps) [2024-11-19T08:51:36.809Z] Copying: 882/1024 [MB] (29 MBps) [2024-11-19T08:51:37.749Z] Copying: 910/1024 [MB] (28 MBps) [2024-11-19T08:51:38.688Z] Copying: 938/1024 [MB] (27 MBps) [2024-11-19T08:51:39.627Z] Copying: 967/1024 [MB] (28 MBps) [2024-11-19T08:51:40.570Z] Copying: 995/1024 [MB] (28 MBps) [2024-11-19T08:51:40.570Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-19 08:51:40.531618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.663 [2024-11-19 08:51:40.531764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:18.663 [2024-11-19 08:51:40.531802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:18.663 [2024-11-19 08:51:40.531827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.663 [2024-11-19 08:51:40.531881] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:18.663 [2024-11-19 08:51:40.532905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.663 [2024-11-19 08:51:40.532959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:18.663 [2024-11-19 08:51:40.532983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.987 ms 00:30:18.663 [2024-11-19 08:51:40.533023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.663 [2024-11-19 08:51:40.533557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.663 [2024-11-19 08:51:40.533592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:18.663 [2024-11-19 08:51:40.533615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:30:18.663 [2024-11-19 08:51:40.533636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.663 [2024-11-19 08:51:40.533709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.663 [2024-11-19 08:51:40.533761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:18.663 [2024-11-19 08:51:40.533809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:18.663 [2024-11-19 08:51:40.533829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.663 [2024-11-19 08:51:40.533940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.663 [2024-11-19 08:51:40.533965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:18.663 [2024-11-19 08:51:40.533994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:18.663 [2024-11-19 08:51:40.534014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.663 [2024-11-19 08:51:40.534049] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:18.663 [2024-11-19 08:51:40.534080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:30:18.663 [2024-11-19 08:51:40.534128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534354] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:18.663 [2024-11-19 08:51:40.534587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534711] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.534993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 
08:51:40.535092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:30:18.664 [2024-11-19 08:51:40.535453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:18.664 [2024-11-19 08:51:40.535655] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:18.664 [2024-11-19 08:51:40.535676] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cab91ae8-5b37-48ec-8dc8-334c406b4da9 00:30:18.664 [2024-11-19 08:51:40.535691] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:30:18.664 [2024-11-19 08:51:40.535704] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3616 00:30:18.664 [2024-11-19 08:51:40.535734] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3584 00:30:18.664 [2024-11-19 08:51:40.535750] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:30:18.664 [2024-11-19 08:51:40.535764] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:18.664 [2024-11-19 08:51:40.535784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:18.664 [2024-11-19 08:51:40.535798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:18.664 [2024-11-19 08:51:40.535810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:18.664 [2024-11-19 08:51:40.535822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:18.664 [2024-11-19 08:51:40.535836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.664 [2024-11-19 08:51:40.535851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:18.664 [2024-11-19 08:51:40.535865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.792 ms 00:30:18.664 [2024-11-19 08:51:40.535880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.664 [2024-11-19 08:51:40.538923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.664 
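
The ftl_dev_dump_stats block above reports total writes 3616 against user writes 3584, which is where the WAF (write amplification factor) of 1.0089 comes from: it is simply total media writes divided by user-initiated writes, so roughly 32 of the 3616 writes were presumably FTL housekeeping (metadata, relocation) rather than user data. A quick cross-check using the numbers copied from the dump (this is not part of the test suite):

    # Reproduce the WAF figure from the statistics dump above.
    # total/user write counts are copied from the log, not queried from SPDK.
    total_writes=3616
    user_writes=3584
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.4f\n", t / u }'
    # prints: WAF: 1.0089
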
[2024-11-19 08:51:40.539169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:18.664 [2024-11-19 08:51:40.539189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.010 ms 00:30:18.664 [2024-11-19 08:51:40.539211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.664 [2024-11-19 08:51:40.539359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.664 [2024-11-19 08:51:40.539376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:18.664 [2024-11-19 08:51:40.539391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:30:18.664 [2024-11-19 08:51:40.539404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.664 [2024-11-19 08:51:40.547649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.665 [2024-11-19 08:51:40.547695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:18.665 [2024-11-19 08:51:40.547709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.665 [2024-11-19 08:51:40.547749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.665 [2024-11-19 08:51:40.547815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.665 [2024-11-19 08:51:40.547826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:18.665 [2024-11-19 08:51:40.547837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.665 [2024-11-19 08:51:40.547864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.665 [2024-11-19 08:51:40.547945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.665 [2024-11-19 08:51:40.547961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:18.665 [2024-11-19 08:51:40.547977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.665 [2024-11-19 08:51:40.547987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.665 [2024-11-19 08:51:40.548007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.665 [2024-11-19 08:51:40.548018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:18.665 [2024-11-19 08:51:40.548037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.665 [2024-11-19 08:51:40.548047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.665 [2024-11-19 08:51:40.563215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.665 [2024-11-19 08:51:40.563347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:18.665 [2024-11-19 08:51:40.563368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.665 [2024-11-19 08:51:40.563376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.926 [2024-11-19 08:51:40.572965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.926 [2024-11-19 08:51:40.573015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:18.926 [2024-11-19 08:51:40.573026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.926 [2024-11-19 08:51:40.573035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.926 [2024-11-19 08:51:40.573086] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.926 [2024-11-19 08:51:40.573094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:18.926 [2024-11-19 08:51:40.573102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.926 [2024-11-19 08:51:40.573116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.926 [2024-11-19 08:51:40.573139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.926 [2024-11-19 08:51:40.573147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:18.926 [2024-11-19 08:51:40.573155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.926 [2024-11-19 08:51:40.573162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.926 [2024-11-19 08:51:40.573217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.926 [2024-11-19 08:51:40.573229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:18.926 [2024-11-19 08:51:40.573236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.926 [2024-11-19 08:51:40.573243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.926 [2024-11-19 08:51:40.573298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.926 [2024-11-19 08:51:40.573308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:18.926 [2024-11-19 08:51:40.573316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.926 [2024-11-19 08:51:40.573322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.926 [2024-11-19 08:51:40.573360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.926 [2024-11-19 08:51:40.573368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:18.926 [2024-11-19 08:51:40.573375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.926 [2024-11-19 08:51:40.573382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.926 [2024-11-19 08:51:40.573424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.926 [2024-11-19 08:51:40.573434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:18.926 [2024-11-19 08:51:40.573450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.926 [2024-11-19 08:51:40.573457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.926 [2024-11-19 08:51:40.573587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.056 ms, result 0 00:30:18.926 00:30:18.926 00:30:18.926 08:51:40 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:20.836 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:20.836 08:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:20.836 08:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:30:20.836 08:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:20.836 08:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:20.836 08:51:42 ftl.ftl_restore_fast -- 
ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:20.836 Process with pid 92058 is not found 00:30:20.836 Remove shared memory files 00:30:20.836 08:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92058 00:30:20.836 08:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 92058 ']' 00:30:20.837 08:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92058 00:30:20.837 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92058) - No such process 00:30:20.837 08:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 92058 is not found' 00:30:20.837 08:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:30:20.837 08:51:42 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:20.837 08:51:42 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:30:20.837 08:51:42 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_cab91ae8-5b37-48ec-8dc8-334c406b4da9_band_md /dev/hugepages/ftl_cab91ae8-5b37-48ec-8dc8-334c406b4da9_l2p_l1 /dev/hugepages/ftl_cab91ae8-5b37-48ec-8dc8-334c406b4da9_l2p_l2 /dev/hugepages/ftl_cab91ae8-5b37-48ec-8dc8-334c406b4da9_l2p_l2_ctx /dev/hugepages/ftl_cab91ae8-5b37-48ec-8dc8-334c406b4da9_nvc_md /dev/hugepages/ftl_cab91ae8-5b37-48ec-8dc8-334c406b4da9_p2l_pool /dev/hugepages/ftl_cab91ae8-5b37-48ec-8dc8-334c406b4da9_sb /dev/hugepages/ftl_cab91ae8-5b37-48ec-8dc8-334c406b4da9_sb_shm /dev/hugepages/ftl_cab91ae8-5b37-48ec-8dc8-334c406b4da9_trim_bitmap /dev/hugepages/ftl_cab91ae8-5b37-48ec-8dc8-334c406b4da9_trim_log /dev/hugepages/ftl_cab91ae8-5b37-48ec-8dc8-334c406b4da9_trim_md /dev/hugepages/ftl_cab91ae8-5b37-48ec-8dc8-334c406b4da9_vmap 00:30:20.837 08:51:42 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:30:20.837 08:51:42 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:20.837 08:51:42 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:30:20.837 00:30:20.837 real 3m1.497s 00:30:20.837 user 2m51.484s 00:30:20.837 sys 0m11.590s 00:30:20.837 08:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:20.837 08:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:20.837 ************************************ 00:30:20.837 END TEST ftl_restore_fast 00:30:20.837 ************************************ 00:30:20.837 08:51:42 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:30:20.837 08:51:42 ftl -- ftl/ftl.sh@14 -- # killprocess 84905 00:30:20.837 08:51:42 ftl -- common/autotest_common.sh@954 -- # '[' -z 84905 ']' 00:30:20.837 08:51:42 ftl -- common/autotest_common.sh@958 -- # kill -0 84905 00:30:20.837 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84905) - No such process 00:30:20.837 Process with pid 84905 is not found 00:30:20.837 08:51:42 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 84905 is not found' 00:30:20.837 08:51:42 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:30:20.837 08:51:42 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=93964 00:30:20.837 08:51:42 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:20.837 08:51:42 ftl -- ftl/ftl.sh@20 -- # waitforlisten 93964 00:30:20.837 08:51:42 ftl -- common/autotest_common.sh@835 -- # '[' -z 93964 ']' 00:30:20.837 08:51:42 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.837 08:51:42 ftl -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:20.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.837 08:51:42 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.837 08:51:42 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:20.837 08:51:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:20.837 [2024-11-19 08:51:42.739948] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:30:20.837 [2024-11-19 08:51:42.740101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93964 ] 00:30:21.097 [2024-11-19 08:51:42.894449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.097 [2024-11-19 08:51:42.918861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.667 08:51:43 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.667 08:51:43 ftl -- common/autotest_common.sh@868 -- # return 0 00:30:21.667 08:51:43 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:21.926 nvme0n1 00:30:21.926 08:51:43 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:30:21.926 08:51:43 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:21.926 08:51:43 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:22.187 08:51:43 ftl -- ftl/common.sh@28 -- # stores=4f03fc7a-4d98-4e95-ac8a-a29f09dca454 00:30:22.187 08:51:43 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:30:22.187 08:51:43 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4f03fc7a-4d98-4e95-ac8a-a29f09dca454 00:30:22.447 08:51:44 ftl -- ftl/ftl.sh@23 -- # killprocess 93964 00:30:22.447 08:51:44 ftl -- common/autotest_common.sh@954 -- # '[' -z 93964 ']' 00:30:22.447 08:51:44 ftl -- common/autotest_common.sh@958 -- # kill -0 93964 00:30:22.447 08:51:44 ftl -- common/autotest_common.sh@959 -- # uname 00:30:22.447 08:51:44 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.447 08:51:44 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93964 00:30:22.447 08:51:44 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:22.447 08:51:44 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:22.447 killing process with pid 93964 00:30:22.447 08:51:44 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93964' 00:30:22.447 08:51:44 ftl -- common/autotest_common.sh@973 -- # kill 93964 00:30:22.447 08:51:44 ftl -- common/autotest_common.sh@978 -- # wait 93964 00:30:22.707 08:51:44 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:23.277 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:23.277 Waiting for block devices as requested 00:30:23.277 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:23.277 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:23.537 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:23.537 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:28.878 * Events for some block/disk devices (0000:00:13.0) were not 
caught, they may be missing 00:30:28.878 08:51:50 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:30:28.878 Remove shared memory files 00:30:28.878 08:51:50 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:28.878 08:51:50 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:30:28.878 08:51:50 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:30:28.878 08:51:50 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:30:28.878 08:51:50 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:28.878 08:51:50 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:30:28.878 00:30:28.878 real 13m12.807s 00:30:28.878 user 15m25.171s 00:30:28.878 sys 1m28.715s 00:30:28.878 08:51:50 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.878 08:51:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:28.878 ************************************ 00:30:28.878 END TEST ftl 00:30:28.878 ************************************ 00:30:28.878 08:51:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:30:28.878 08:51:50 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:30:28.878 08:51:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:30:28.878 08:51:50 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:30:28.878 08:51:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:30:28.878 08:51:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:30:28.878 08:51:50 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:30:28.878 08:51:50 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:30:28.878 08:51:50 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:30:28.878 08:51:50 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:30:28.878 08:51:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.878 08:51:50 -- common/autotest_common.sh@10 -- # set +x 00:30:28.878 08:51:50 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:30:28.878 08:51:50 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:30:28.878 08:51:50 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:30:28.878 08:51:50 -- common/autotest_common.sh@10 -- # set +x 00:30:30.790 INFO: APP EXITING 00:30:30.790 INFO: killing all VMs 00:30:30.790 INFO: killing vhost app 00:30:30.790 INFO: EXIT DONE 00:30:31.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:31.931 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:31.931 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:31.931 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:31.931 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:32.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:32.763 Cleaning 00:30:32.763 Removing: /var/run/dpdk/spdk0/config 00:30:32.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:32.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:32.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:32.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:32.763 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:32.763 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:33.023 Removing: /var/run/dpdk/spdk0 00:30:33.023 Removing: /var/run/dpdk/spdk_pid70012 00:30:33.023 Removing: /var/run/dpdk/spdk_pid70176 00:30:33.023 Removing: /var/run/dpdk/spdk_pid70383 00:30:33.023 Removing: /var/run/dpdk/spdk_pid70465 00:30:33.023 Removing: /var/run/dpdk/spdk_pid70498 00:30:33.023 Removing: /var/run/dpdk/spdk_pid70605 00:30:33.023 Removing: /var/run/dpdk/spdk_pid70623 
00:30:33.023 Removing: /var/run/dpdk/spdk_pid70811 00:30:33.023 Removing: /var/run/dpdk/spdk_pid70879 00:30:33.023 Removing: /var/run/dpdk/spdk_pid70964 00:30:33.023 Removing: /var/run/dpdk/spdk_pid71064 00:30:33.023 Removing: /var/run/dpdk/spdk_pid71150 00:30:33.023 Removing: /var/run/dpdk/spdk_pid71186 00:30:33.023 Removing: /var/run/dpdk/spdk_pid71217 00:30:33.023 Removing: /var/run/dpdk/spdk_pid71293 00:30:33.023 Removing: /var/run/dpdk/spdk_pid71405 00:30:33.023 Removing: /var/run/dpdk/spdk_pid71846 00:30:33.023 Removing: /var/run/dpdk/spdk_pid71899 00:30:33.023 Removing: /var/run/dpdk/spdk_pid71951 00:30:33.023 Removing: /var/run/dpdk/spdk_pid71967 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72025 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72041 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72105 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72115 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72168 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72186 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72228 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72246 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72374 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72414 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72493 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72660 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72733 00:30:33.023 Removing: /var/run/dpdk/spdk_pid72764 00:30:33.023 Removing: /var/run/dpdk/spdk_pid73185 00:30:33.023 Removing: /var/run/dpdk/spdk_pid73272 00:30:33.023 Removing: /var/run/dpdk/spdk_pid73381 00:30:33.023 Removing: /var/run/dpdk/spdk_pid73423 00:30:33.023 Removing: /var/run/dpdk/spdk_pid73443 00:30:33.023 Removing: /var/run/dpdk/spdk_pid73527 00:30:33.023 Removing: /var/run/dpdk/spdk_pid74141 00:30:33.023 Removing: /var/run/dpdk/spdk_pid74172 00:30:33.023 Removing: /var/run/dpdk/spdk_pid74638 00:30:33.023 Removing: /var/run/dpdk/spdk_pid74726 00:30:33.023 Removing: /var/run/dpdk/spdk_pid74830 00:30:33.023 Removing: /var/run/dpdk/spdk_pid74872 00:30:33.023 Removing: /var/run/dpdk/spdk_pid74897 00:30:33.023 Removing: /var/run/dpdk/spdk_pid74923 00:30:33.023 Removing: /var/run/dpdk/spdk_pid76770 00:30:33.023 Removing: /var/run/dpdk/spdk_pid76896 00:30:33.023 Removing: /var/run/dpdk/spdk_pid76900 00:30:33.023 Removing: /var/run/dpdk/spdk_pid76912 00:30:33.023 Removing: /var/run/dpdk/spdk_pid76979 00:30:33.023 Removing: /var/run/dpdk/spdk_pid76984 00:30:33.023 Removing: /var/run/dpdk/spdk_pid76996 00:30:33.284 Removing: /var/run/dpdk/spdk_pid77041 00:30:33.284 Removing: /var/run/dpdk/spdk_pid77045 00:30:33.284 Removing: /var/run/dpdk/spdk_pid77057 00:30:33.284 Removing: /var/run/dpdk/spdk_pid77108 00:30:33.284 Removing: /var/run/dpdk/spdk_pid77112 00:30:33.284 Removing: /var/run/dpdk/spdk_pid77124 00:30:33.284 Removing: /var/run/dpdk/spdk_pid78548 00:30:33.284 Removing: /var/run/dpdk/spdk_pid78634 00:30:33.284 Removing: /var/run/dpdk/spdk_pid80048 00:30:33.284 Removing: /var/run/dpdk/spdk_pid81401 00:30:33.284 Removing: /var/run/dpdk/spdk_pid81462 00:30:33.284 Removing: /var/run/dpdk/spdk_pid81521 00:30:33.284 Removing: /var/run/dpdk/spdk_pid81581 00:30:33.284 Removing: /var/run/dpdk/spdk_pid81669 00:30:33.284 Removing: /var/run/dpdk/spdk_pid81733 00:30:33.284 Removing: /var/run/dpdk/spdk_pid81871 00:30:33.284 Removing: /var/run/dpdk/spdk_pid82226 00:30:33.284 Removing: /var/run/dpdk/spdk_pid82251 00:30:33.284 Removing: /var/run/dpdk/spdk_pid82691 00:30:33.284 Removing: /var/run/dpdk/spdk_pid82865 00:30:33.284 Removing: /var/run/dpdk/spdk_pid82956 00:30:33.284 Removing: 
/var/run/dpdk/spdk_pid83061 00:30:33.284 Removing: /var/run/dpdk/spdk_pid83100 00:30:33.284 Removing: /var/run/dpdk/spdk_pid83124 00:30:33.284 Removing: /var/run/dpdk/spdk_pid83499 00:30:33.284 Removing: /var/run/dpdk/spdk_pid83537 00:30:33.284 Removing: /var/run/dpdk/spdk_pid83593 00:30:33.284 Removing: /var/run/dpdk/spdk_pid83966 00:30:33.284 Removing: /var/run/dpdk/spdk_pid84108 00:30:33.284 Removing: /var/run/dpdk/spdk_pid84905 00:30:33.284 Removing: /var/run/dpdk/spdk_pid85026 00:30:33.284 Removing: /var/run/dpdk/spdk_pid85197 00:30:33.284 Removing: /var/run/dpdk/spdk_pid85290 00:30:33.284 Removing: /var/run/dpdk/spdk_pid85640 00:30:33.284 Removing: /var/run/dpdk/spdk_pid85893 00:30:33.284 Removing: /var/run/dpdk/spdk_pid86250 00:30:33.284 Removing: /var/run/dpdk/spdk_pid86445 00:30:33.284 Removing: /var/run/dpdk/spdk_pid86558 00:30:33.284 Removing: /var/run/dpdk/spdk_pid86595 00:30:33.284 Removing: /var/run/dpdk/spdk_pid86713 00:30:33.284 Removing: /var/run/dpdk/spdk_pid86727 00:30:33.284 Removing: /var/run/dpdk/spdk_pid86763 00:30:33.284 Removing: /var/run/dpdk/spdk_pid86949 00:30:33.284 Removing: /var/run/dpdk/spdk_pid87186 00:30:33.284 Removing: /var/run/dpdk/spdk_pid87620 00:30:33.284 Removing: /var/run/dpdk/spdk_pid88050 00:30:33.284 Removing: /var/run/dpdk/spdk_pid88515 00:30:33.284 Removing: /var/run/dpdk/spdk_pid88998 00:30:33.284 Removing: /var/run/dpdk/spdk_pid89144 00:30:33.284 Removing: /var/run/dpdk/spdk_pid89213 00:30:33.284 Removing: /var/run/dpdk/spdk_pid89794 00:30:33.284 Removing: /var/run/dpdk/spdk_pid89848 00:30:33.284 Removing: /var/run/dpdk/spdk_pid90303 00:30:33.284 Removing: /var/run/dpdk/spdk_pid90661 00:30:33.284 Removing: /var/run/dpdk/spdk_pid91158 00:30:33.284 Removing: /var/run/dpdk/spdk_pid91280 00:30:33.544 Removing: /var/run/dpdk/spdk_pid91312 00:30:33.544 Removing: /var/run/dpdk/spdk_pid91361 00:30:33.544 Removing: /var/run/dpdk/spdk_pid91411 00:30:33.544 Removing: /var/run/dpdk/spdk_pid91460 00:30:33.544 Removing: /var/run/dpdk/spdk_pid91630 00:30:33.544 Removing: /var/run/dpdk/spdk_pid91701 00:30:33.544 Removing: /var/run/dpdk/spdk_pid91761 00:30:33.544 Removing: /var/run/dpdk/spdk_pid91812 00:30:33.544 Removing: /var/run/dpdk/spdk_pid91850 00:30:33.544 Removing: /var/run/dpdk/spdk_pid91902 00:30:33.544 Removing: /var/run/dpdk/spdk_pid92058 00:30:33.544 Removing: /var/run/dpdk/spdk_pid92295 00:30:33.544 Removing: /var/run/dpdk/spdk_pid92733 00:30:33.544 Removing: /var/run/dpdk/spdk_pid93108 00:30:33.544 Removing: /var/run/dpdk/spdk_pid93550 00:30:33.544 Removing: /var/run/dpdk/spdk_pid93964 00:30:33.544 Clean 00:30:33.544 08:51:55 -- common/autotest_common.sh@1453 -- # return 0 00:30:33.544 08:51:55 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:30:33.544 08:51:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:33.544 08:51:55 -- common/autotest_common.sh@10 -- # set +x 00:30:33.544 08:51:55 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:30:33.544 08:51:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:33.544 08:51:55 -- common/autotest_common.sh@10 -- # set +x 00:30:33.805 08:51:55 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:33.805 08:51:55 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:33.805 08:51:55 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:33.805 08:51:55 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:30:33.805 08:51:55 -- spdk/autotest.sh@398 -- # hostname 
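
The "Cleaning" pass above removes leftover DPDK runtime state: the spdk0 primary-process directory (config, fbarray memseg/memzone files, hugepage_info) plus, apparently, one /var/run/dpdk/spdk_pid* entry for each SPDK process the run started, so the next job begins with no stale shared-memory bookkeeping. A generic approximation of that sweep (a sketch only, not the actual autotest helper, whose implementation is not shown in this log):

    # Sketch: drop leftover DPDK runtime state so a later run starts clean.
    # The real cleanup is done by the SPDK autotest scripts; paths match the log above.
    rm -rf /var/run/dpdk/spdk0/config \
           /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-* \
           /var/run/dpdk/spdk0/fbarray_memzone \
           /var/run/dpdk/spdk0/hugepage_info
    for d in /var/run/dpdk/spdk_pid*; do
        [ -e "$d" ] && echo "Removing: $d" && rm -rf "$d"
    done
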
00:30:33.805 08:51:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:33.805 geninfo: WARNING: invalid characters removed from testname! 00:31:00.375 08:52:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:00.375 08:52:22 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:02.918 08:52:24 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:04.829 08:52:26 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:07.369 08:52:28 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:09.278 08:52:31 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:11.206 08:52:33 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:11.206 08:52:33 -- spdk/autorun.sh@1 -- $ timing_finish 00:31:11.206 08:52:33 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:31:11.206 08:52:33 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:11.206 08:52:33 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:31:11.206 08:52:33 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
/home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:11.481 + [[ -n 6204 ]] 00:31:11.481 + sudo kill 6204 00:31:11.492 [Pipeline] } 00:31:11.511 [Pipeline] // timeout 00:31:11.516 [Pipeline] } 00:31:11.533 [Pipeline] // stage 00:31:11.539 [Pipeline] } 00:31:11.554 [Pipeline] // catchError 00:31:11.564 [Pipeline] stage 00:31:11.566 [Pipeline] { (Stop VM) 00:31:11.580 [Pipeline] sh 00:31:11.868 + vagrant halt 00:31:14.410 ==> default: Halting domain... 00:31:22.557 [Pipeline] sh 00:31:22.841 + vagrant destroy -f 00:31:25.383 ==> default: Removing domain... 00:31:25.657 [Pipeline] sh 00:31:25.944 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:31:25.955 [Pipeline] } 00:31:25.971 [Pipeline] // stage 00:31:25.977 [Pipeline] } 00:31:25.992 [Pipeline] // dir 00:31:25.998 [Pipeline] } 00:31:26.015 [Pipeline] // wrap 00:31:26.023 [Pipeline] } 00:31:26.035 [Pipeline] // catchError 00:31:26.045 [Pipeline] stage 00:31:26.048 [Pipeline] { (Epilogue) 00:31:26.061 [Pipeline] sh 00:31:26.347 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:31.641 [Pipeline] catchError 00:31:31.644 [Pipeline] { 00:31:31.658 [Pipeline] sh 00:31:31.954 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:31.954 Artifacts sizes are good 00:31:32.004 [Pipeline] } 00:31:32.020 [Pipeline] // catchError 00:31:32.032 [Pipeline] archiveArtifacts 00:31:32.039 Archiving artifacts 00:31:32.149 [Pipeline] cleanWs 00:31:32.161 [WS-CLEANUP] Deleting project workspace... 00:31:32.161 [WS-CLEANUP] Deferred wipeout is used... 00:31:32.168 [WS-CLEANUP] done 00:31:32.170 [Pipeline] } 00:31:32.186 [Pipeline] // stage 00:31:32.192 [Pipeline] } 00:31:32.206 [Pipeline] // node 00:31:32.211 [Pipeline] End of Pipeline 00:31:32.255 Finished: SUCCESS
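
For reference, the coverage post-processing that runs just before the VM teardown above boils down to: merge the base and test lcov captures into one tracefile, then strip DPDK, system headers, and a few SPDK example/app directories from it before the flamegraph step. A condensed sketch of that sequence (paths resolved and the long --rc coverage switches from the log omitted for brevity; this is not the literal autotest.sh code):

    # Merge base + test coverage and filter out paths that should not be in the report.
    out=/home/vagrant/spdk_repo/output    # same directory as spdk/../output in the log
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"
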